00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 838 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3498 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.061 using credential 00000000-0000-0000-0000-000000000002 00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.185 Using shallow fetch with depth 1 00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.186 > git --version # timeout=10 00:00:00.253 > git --version # 'git version 2.39.2' 00:00:00.253 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.710 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.723 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.735 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:05.735 > git config core.sparsecheckout # timeout=10 00:00:05.747 > git read-tree -mu HEAD # timeout=10 00:00:05.762 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:05.780 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:05.781 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:05.896 [Pipeline] Start of Pipeline 00:00:05.911 [Pipeline] library 00:00:05.913 Loading library shm_lib@master 00:00:05.913 Library shm_lib@master is cached. Copying from home. 00:00:05.929 [Pipeline] node 00:00:05.940 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.942 [Pipeline] { 00:00:05.949 [Pipeline] catchError 00:00:05.950 [Pipeline] { 00:00:05.962 [Pipeline] wrap 00:00:05.968 [Pipeline] { 00:00:05.975 [Pipeline] stage 00:00:05.976 [Pipeline] { (Prologue) 00:00:05.990 [Pipeline] echo 00:00:05.991 Node: VM-host-WFP1 00:00:05.996 [Pipeline] cleanWs 00:00:06.005 [WS-CLEANUP] Deleting project workspace... 00:00:06.005 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.012 [WS-CLEANUP] done 00:00:06.182 [Pipeline] setCustomBuildProperty 00:00:06.271 [Pipeline] httpRequest 00:00:06.634 [Pipeline] echo 00:00:06.635 Sorcerer 10.211.164.101 is alive 00:00:06.644 [Pipeline] retry 00:00:06.645 [Pipeline] { 00:00:06.658 [Pipeline] httpRequest 00:00:06.665 HttpMethod: GET 00:00:06.665 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:06.666 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:06.673 Response Code: HTTP/1.1 200 OK 00:00:06.674 Success: Status code 200 is in the accepted range: 200,404 00:00:06.674 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.866 [Pipeline] } 00:00:08.880 [Pipeline] // retry 00:00:08.888 [Pipeline] sh 00:00:09.169 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:09.185 [Pipeline] httpRequest 00:00:09.814 [Pipeline] echo 00:00:09.815 Sorcerer 10.211.164.101 is alive 00:00:09.825 [Pipeline] retry 00:00:09.827 [Pipeline] { 00:00:09.838 [Pipeline] httpRequest 00:00:09.841 HttpMethod: GET 00:00:09.842 URL: http://10.211.164.101/packages/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:09.842 Sending request to url: http://10.211.164.101/packages/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:09.857 Response Code: HTTP/1.1 200 OK 00:00:09.858 Success: Status code 200 is in the accepted range: 200,404 00:00:09.858 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:50.200 [Pipeline] } 00:00:50.218 [Pipeline] // retry 00:00:50.226 [Pipeline] sh 00:00:50.515 + tar --no-same-owner -xf spdk_e9b86137823c4255d2b9511d8465fe530a43c489.tar.gz 00:00:53.065 [Pipeline] sh 00:00:53.350 + git -C spdk log --oneline -n5 00:00:53.350 e9b861378 lib/iscsi: Fix: Unregister logout timer 00:00:53.350 081f43f2b lib/nvmf: Fix memory leak in nvmf_bdev_ctrlr_unmap 00:00:53.350 daeaec816 test/unit: remove unneeded MOCKs from ftl unit tests 00:00:53.350 78f92084e module/bdev: dump more info about compress 00:00:53.350 5e156a6e7 nvmf/rdma: fix last_wqe_reached ctx handling 00:00:53.371 [Pipeline] withCredentials 00:00:53.383 > git --version # timeout=10 00:00:53.396 > git --version # 'git version 2.39.2' 00:00:53.414 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:53.416 [Pipeline] { 00:00:53.426 [Pipeline] retry 00:00:53.428 [Pipeline] { 00:00:53.444 [Pipeline] sh 00:00:53.730 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:57.035 [Pipeline] } 00:00:57.053 [Pipeline] // retry 00:00:57.059 [Pipeline] } 00:00:57.075 [Pipeline] // withCredentials 00:00:57.086 [Pipeline] httpRequest 00:00:57.517 [Pipeline] echo 00:00:57.519 Sorcerer 10.211.164.101 is alive 00:00:57.529 [Pipeline] retry 00:00:57.531 [Pipeline] { 00:00:57.545 [Pipeline] httpRequest 00:00:57.549 HttpMethod: GET 00:00:57.550 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:57.550 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:57.555 Response Code: HTTP/1.1 200 OK 00:00:57.556 Success: Status code 200 is in the accepted range: 200,404 00:00:57.556 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:39.054 [Pipeline] } 00:01:39.072 [Pipeline] // retry 00:01:39.081 [Pipeline] 
sh 00:01:39.366 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.756 [Pipeline] sh 00:01:41.038 + git -C dpdk log --oneline -n5 00:01:41.038 eeb0605f11 version: 23.11.0 00:01:41.038 238778122a doc: update release notes for 23.11 00:01:41.038 46aa6b3cfc doc: fix description of RSS features 00:01:41.038 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:41.038 7e421ae345 devtools: support skipping forbid rule check 00:01:41.055 [Pipeline] writeFile 00:01:41.070 [Pipeline] sh 00:01:41.353 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:41.364 [Pipeline] sh 00:01:41.645 + cat autorun-spdk.conf 00:01:41.645 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.645 SPDK_TEST_NVME=1 00:01:41.645 SPDK_TEST_FTL=1 00:01:41.645 SPDK_TEST_ISAL=1 00:01:41.645 SPDK_RUN_ASAN=1 00:01:41.645 SPDK_RUN_UBSAN=1 00:01:41.645 SPDK_TEST_XNVME=1 00:01:41.645 SPDK_TEST_NVME_FDP=1 00:01:41.645 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.645 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.645 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.653 RUN_NIGHTLY=1 00:01:41.655 [Pipeline] } 00:01:41.668 [Pipeline] // stage 00:01:41.682 [Pipeline] stage 00:01:41.684 [Pipeline] { (Run VM) 00:01:41.697 [Pipeline] sh 00:01:41.978 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.978 + echo 'Start stage prepare_nvme.sh' 00:01:41.978 Start stage prepare_nvme.sh 00:01:41.978 + [[ -n 4 ]] 00:01:41.978 + disk_prefix=ex4 00:01:41.978 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:41.978 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:41.978 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:41.978 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.978 ++ SPDK_TEST_NVME=1 00:01:41.978 ++ SPDK_TEST_FTL=1 00:01:41.978 ++ SPDK_TEST_ISAL=1 00:01:41.978 ++ SPDK_RUN_ASAN=1 00:01:41.978 ++ SPDK_RUN_UBSAN=1 00:01:41.978 ++ SPDK_TEST_XNVME=1 00:01:41.978 ++ SPDK_TEST_NVME_FDP=1 00:01:41.978 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.978 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.978 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.978 ++ RUN_NIGHTLY=1 00:01:41.978 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:41.978 + nvme_files=() 00:01:41.978 + declare -A nvme_files 00:01:41.978 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.978 + nvme_files['nvme.img']=5G 00:01:41.978 + nvme_files['nvme-cmb.img']=5G 00:01:41.978 + nvme_files['nvme-multi0.img']=4G 00:01:41.978 + nvme_files['nvme-multi1.img']=4G 00:01:41.978 + nvme_files['nvme-multi2.img']=4G 00:01:41.978 + nvme_files['nvme-openstack.img']=8G 00:01:41.978 + nvme_files['nvme-zns.img']=5G 00:01:41.978 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.978 + (( SPDK_TEST_FTL == 1 )) 00:01:41.978 + nvme_files["nvme-ftl.img"]=6G 00:01:41.978 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.978 + nvme_files["nvme-fdp.img"]=1G 00:01:41.978 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:41.978 + for nvme in "${!nvme_files[@]}" 00:01:41.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:41.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.978 + for nvme in "${!nvme_files[@]}" 00:01:41.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:41.978 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:41.978 + for nvme in "${!nvme_files[@]}" 00:01:41.978 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:42.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.236 + for nvme in "${!nvme_files[@]}" 00:01:42.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:42.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:42.236 + for nvme in "${!nvme_files[@]}" 00:01:42.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:42.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.236 + for nvme in "${!nvme_files[@]}" 00:01:42.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:42.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.236 + for nvme in "${!nvme_files[@]}" 00:01:42.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:42.236 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.236 + for nvme in "${!nvme_files[@]}" 00:01:42.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:42.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:42.495 + for nvme in "${!nvme_files[@]}" 00:01:42.495 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:42.495 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.495 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:42.495 + echo 'End stage prepare_nvme.sh' 00:01:42.495 End stage prepare_nvme.sh 00:01:42.508 [Pipeline] sh 00:01:42.838 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.838 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:43.097 00:01:43.097 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:43.097 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:43.097 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:43.097 HELP=0 00:01:43.097 DRY_RUN=0 00:01:43.097 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:43.097 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:43.097 NVME_AUTO_CREATE=0 00:01:43.097 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:43.097 NVME_CMB=,,,, 00:01:43.097 NVME_PMR=,,,, 00:01:43.097 NVME_ZNS=,,,, 00:01:43.097 NVME_MS=true,,,, 00:01:43.097 NVME_FDP=,,,on, 00:01:43.097 SPDK_VAGRANT_DISTRO=fedora39 00:01:43.097 SPDK_VAGRANT_VMCPU=10 00:01:43.097 SPDK_VAGRANT_VMRAM=12288 00:01:43.097 SPDK_VAGRANT_PROVIDER=libvirt 00:01:43.097 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:43.097 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:43.097 SPDK_OPENSTACK_NETWORK=0 00:01:43.097 VAGRANT_PACKAGE_BOX=0 00:01:43.097 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:43.097 FORCE_DISTRO=true 00:01:43.097 VAGRANT_BOX_VERSION= 00:01:43.097 EXTRA_VAGRANTFILES= 00:01:43.097 NIC_MODEL=e1000 00:01:43.097 00:01:43.097 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:43.097 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:45.633 Bringing machine 'default' up with 'libvirt' provider... 00:01:47.015 ==> default: Creating image (snapshot of base box volume). 00:01:47.015 ==> default: Creating domain with the following settings... 
00:01:47.015 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727794904_6a3bf8e5aab1e630adcb 00:01:47.015 ==> default: -- Domain type: kvm 00:01:47.015 ==> default: -- Cpus: 10 00:01:47.015 ==> default: -- Feature: acpi 00:01:47.015 ==> default: -- Feature: apic 00:01:47.015 ==> default: -- Feature: pae 00:01:47.015 ==> default: -- Memory: 12288M 00:01:47.015 ==> default: -- Memory Backing: hugepages: 00:01:47.015 ==> default: -- Management MAC: 00:01:47.015 ==> default: -- Loader: 00:01:47.015 ==> default: -- Nvram: 00:01:47.015 ==> default: -- Base box: spdk/fedora39 00:01:47.015 ==> default: -- Storage pool: default 00:01:47.015 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727794904_6a3bf8e5aab1e630adcb.img (20G) 00:01:47.015 ==> default: -- Volume Cache: default 00:01:47.015 ==> default: -- Kernel: 00:01:47.015 ==> default: -- Initrd: 00:01:47.015 ==> default: -- Graphics Type: vnc 00:01:47.015 ==> default: -- Graphics Port: -1 00:01:47.015 ==> default: -- Graphics IP: 127.0.0.1 00:01:47.015 ==> default: -- Graphics Password: Not defined 00:01:47.015 ==> default: -- Video Type: cirrus 00:01:47.015 ==> default: -- Video VRAM: 9216 00:01:47.015 ==> default: -- Sound Type: 00:01:47.015 ==> default: -- Keymap: en-us 00:01:47.015 ==> default: -- TPM Path: 00:01:47.015 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:47.015 ==> default: -- Command line args: 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:47.015 ==> default: -> value=-drive, 00:01:47.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:47.015 ==> default: -> value=-device, 00:01:47.015 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.586 ==> default: Creating shared folders metadata... 00:01:47.586 ==> default: Starting domain. 00:01:49.493 ==> default: Waiting for domain to get an IP address... 00:02:07.587 ==> default: Waiting for SSH to become available... 00:02:07.587 ==> default: Configuring and enabling network interfaces... 00:02:12.866 default: SSH address: 192.168.121.125:22 00:02:12.866 default: SSH username: vagrant 00:02:12.866 default: SSH auth method: private key 00:02:15.408 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:23.560 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:31.691 ==> default: Mounting SSHFS shared folder... 00:02:33.071 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:33.071 ==> default: Checking Mount.. 00:02:34.980 ==> default: Folder Successfully Mounted! 00:02:34.980 ==> default: Running provisioner: file... 00:02:35.918 default: ~/.gitconfig => .gitconfig 00:02:36.488 00:02:36.488 SUCCESS! 00:02:36.488 00:02:36.488 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:36.488 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:36.488 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:36.488 00:02:36.497 [Pipeline] } 00:02:36.513 [Pipeline] // stage 00:02:36.522 [Pipeline] dir 00:02:36.523 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:36.524 [Pipeline] { 00:02:36.536 [Pipeline] catchError 00:02:36.538 [Pipeline] { 00:02:36.548 [Pipeline] sh 00:02:36.859 + vagrant ssh-config --host vagrant 00:02:36.859 + sed -ne /^Host/,$p 00:02:36.859 + tee ssh_conf 00:02:39.514 Host vagrant 00:02:39.514 HostName 192.168.121.125 00:02:39.514 User vagrant 00:02:39.514 Port 22 00:02:39.514 UserKnownHostsFile /dev/null 00:02:39.514 StrictHostKeyChecking no 00:02:39.514 PasswordAuthentication no 00:02:39.514 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:39.514 IdentitiesOnly yes 00:02:39.514 LogLevel FATAL 00:02:39.514 ForwardAgent yes 00:02:39.514 ForwardX11 yes 00:02:39.514 00:02:39.527 [Pipeline] withEnv 00:02:39.529 [Pipeline] { 00:02:39.541 [Pipeline] sh 00:02:39.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:39.824 source /etc/os-release 00:02:39.824 [[ -e /image.version ]] && img=$(< /image.version) 00:02:39.824 # Minimal, systemd-like check. 
00:02:39.824 if [[ -e /.dockerenv ]]; then 00:02:39.824 # Clear garbage from the node's name: 00:02:39.824 # agt-er_autotest_547-896 -> autotest_547-896 00:02:39.824 # $HOSTNAME is the actual container id 00:02:39.824 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:39.824 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:39.824 # We can assume this is a mount from a host where container is running, 00:02:39.824 # so fetch its hostname to easily identify the target swarm worker. 00:02:39.824 container="$(< /etc/hostname) ($agent)" 00:02:39.824 else 00:02:39.824 # Fallback 00:02:39.824 container=$agent 00:02:39.824 fi 00:02:39.824 fi 00:02:39.824 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:39.824 00:02:40.095 [Pipeline] } 00:02:40.105 [Pipeline] // withEnv 00:02:40.110 [Pipeline] setCustomBuildProperty 00:02:40.120 [Pipeline] stage 00:02:40.122 [Pipeline] { (Tests) 00:02:40.137 [Pipeline] sh 00:02:40.414 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:40.682 [Pipeline] sh 00:02:40.958 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:41.230 [Pipeline] timeout 00:02:41.231 Timeout set to expire in 50 min 00:02:41.232 [Pipeline] { 00:02:41.242 [Pipeline] sh 00:02:41.520 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:42.088 HEAD is now at e9b861378 lib/iscsi: Fix: Unregister logout timer 00:02:42.098 [Pipeline] sh 00:02:42.375 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:42.647 [Pipeline] sh 00:02:42.960 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:43.238 [Pipeline] sh 00:02:43.522 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:43.781 ++ readlink -f spdk_repo 00:02:43.781 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:43.781 + [[ -n /home/vagrant/spdk_repo ]] 00:02:43.781 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:43.781 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:43.781 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:43.781 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:43.781 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:43.781 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:43.781 + cd /home/vagrant/spdk_repo 00:02:43.781 + source /etc/os-release 00:02:43.781 ++ NAME='Fedora Linux' 00:02:43.781 ++ VERSION='39 (Cloud Edition)' 00:02:43.781 ++ ID=fedora 00:02:43.781 ++ VERSION_ID=39 00:02:43.781 ++ VERSION_CODENAME= 00:02:43.781 ++ PLATFORM_ID=platform:f39 00:02:43.781 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:43.781 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:43.781 ++ LOGO=fedora-logo-icon 00:02:43.781 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:43.781 ++ HOME_URL=https://fedoraproject.org/ 00:02:43.781 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:43.781 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:43.781 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:43.781 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:43.781 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:43.781 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:43.781 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:43.781 ++ SUPPORT_END=2024-11-12 00:02:43.781 ++ VARIANT='Cloud Edition' 00:02:43.781 ++ VARIANT_ID=cloud 00:02:43.781 + uname -a 00:02:43.781 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:43.781 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:44.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:44.622 Hugepages 00:02:44.622 node hugesize free / total 00:02:44.622 node0 1048576kB 0 / 0 00:02:44.622 node0 2048kB 0 / 0 00:02:44.622 00:02:44.622 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.622 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:44.622 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:44.879 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:44.879 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:44.879 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:44.879 + rm -f /tmp/spdk-ld-path 00:02:44.880 + source autorun-spdk.conf 00:02:44.880 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.880 ++ SPDK_TEST_NVME=1 00:02:44.880 ++ SPDK_TEST_FTL=1 00:02:44.880 ++ SPDK_TEST_ISAL=1 00:02:44.880 ++ SPDK_RUN_ASAN=1 00:02:44.880 ++ SPDK_RUN_UBSAN=1 00:02:44.880 ++ SPDK_TEST_XNVME=1 00:02:44.880 ++ SPDK_TEST_NVME_FDP=1 00:02:44.880 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:44.880 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:44.880 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:44.880 ++ RUN_NIGHTLY=1 00:02:44.880 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:44.880 + [[ -n '' ]] 00:02:44.880 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:44.880 + for M in /var/spdk/build-*-manifest.txt 00:02:44.880 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:44.880 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.880 + for M in /var/spdk/build-*-manifest.txt 00:02:44.880 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:44.880 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.880 + for M in /var/spdk/build-*-manifest.txt 00:02:44.880 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:44.880 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.880 ++ uname 00:02:44.880 + [[ Linux == 
\L\i\n\u\x ]] 00:02:44.880 + sudo dmesg -T 00:02:44.880 + sudo dmesg --clear 00:02:44.880 + dmesg_pid=5981 00:02:44.880 + [[ Fedora Linux == FreeBSD ]] 00:02:44.880 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.880 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.880 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:44.880 + sudo dmesg -Tw 00:02:44.880 + [[ -x /usr/src/fio-static/fio ]] 00:02:44.880 + export FIO_BIN=/usr/src/fio-static/fio 00:02:44.880 + FIO_BIN=/usr/src/fio-static/fio 00:02:44.880 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:44.880 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:44.880 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:44.880 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.880 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.880 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:45.138 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.138 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.138 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:45.138 Test configuration: 00:02:45.138 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.138 SPDK_TEST_NVME=1 00:02:45.138 SPDK_TEST_FTL=1 00:02:45.138 SPDK_TEST_ISAL=1 00:02:45.138 SPDK_RUN_ASAN=1 00:02:45.138 SPDK_RUN_UBSAN=1 00:02:45.138 SPDK_TEST_XNVME=1 00:02:45.138 SPDK_TEST_NVME_FDP=1 00:02:45.138 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:45.138 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:45.138 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.138 RUN_NIGHTLY=1 15:02:43 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:45.138 15:02:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:45.138 15:02:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:45.138 15:02:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:45.138 15:02:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.138 15:02:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.138 15:02:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.138 15:02:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.138 15:02:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.138 15:02:43 -- paths/export.sh@5 -- $ export PATH 00:02:45.138 15:02:43 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.138 15:02:43 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:45.138 15:02:43 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:45.138 15:02:43 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727794963.XXXXXX 00:02:45.138 15:02:43 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727794963.WsJ4F2 00:02:45.138 15:02:43 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:45.138 15:02:43 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:45.138 15:02:43 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:45.138 15:02:43 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:45.138 15:02:43 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:45.138 15:02:43 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:45.138 15:02:43 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:45.138 15:02:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:45.138 15:02:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.138 15:02:43 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:45.138 15:02:43 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:45.138 15:02:43 -- pm/common@17 -- $ local monitor 00:02:45.138 15:02:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.138 15:02:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.138 15:02:43 -- pm/common@25 -- $ sleep 1 00:02:45.138 15:02:43 -- pm/common@21 -- $ date +%s 00:02:45.138 15:02:43 -- pm/common@21 -- $ date +%s 00:02:45.138 15:02:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727794963 00:02:45.138 15:02:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727794963 00:02:45.396 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727794963_collect-vmstat.pm.log 00:02:45.396 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727794963_collect-cpu-load.pm.log 00:02:46.336 15:02:44 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:46.336 15:02:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.336 15:02:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.336 15:02:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:46.336 15:02:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.336 Tue 
Oct 1 03:02:44 PM UTC 2024 00:02:46.336 15:02:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.336 v25.01-pre-23-ge9b861378 00:02:46.336 15:02:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:46.336 15:02:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:46.336 15:02:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.336 15:02:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.336 15:02:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.336 ************************************ 00:02:46.336 START TEST asan 00:02:46.336 ************************************ 00:02:46.336 using asan 00:02:46.336 15:02:44 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:46.336 00:02:46.336 real 0m0.001s 00:02:46.336 user 0m0.001s 00:02:46.336 sys 0m0.000s 00:02:46.336 15:02:44 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.336 ************************************ 00:02:46.336 END TEST asan 00:02:46.336 ************************************ 00:02:46.336 15:02:44 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.336 15:02:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.336 15:02:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.336 15:02:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.336 15:02:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.336 15:02:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.336 ************************************ 00:02:46.336 START TEST ubsan 00:02:46.336 ************************************ 00:02:46.336 using ubsan 00:02:46.336 15:02:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:46.336 00:02:46.336 real 0m0.001s 00:02:46.336 user 0m0.001s 00:02:46.336 sys 0m0.000s 00:02:46.336 15:02:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.336 ************************************ 00:02:46.336 END TEST ubsan 00:02:46.336 15:02:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.336 ************************************ 00:02:46.336 15:02:44 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:46.336 15:02:44 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:46.336 15:02:44 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:46.336 15:02:44 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:46.336 15:02:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.336 15:02:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.336 ************************************ 00:02:46.336 START TEST build_native_dpdk 00:02:46.336 ************************************ 00:02:46.336 15:02:44 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:46.336 15:02:44 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:46.336 eeb0605f11 version: 23.11.0 00:02:46.336 238778122a doc: update release notes for 23.11 00:02:46.336 46aa6b3cfc doc: fix description of RSS features 00:02:46.336 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:46.336 7e421ae345 devtools: support skipping forbid rule check 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:46.336 15:02:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:46.596 15:02:44 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.596 15:02:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:46.596 patching file config/rte_config.h 00:02:46.596 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:46.596 15:02:44 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:46.597 patching file lib/pcapng/rte_pcapng.c 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.597 15:02:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:46.597 15:02:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:53.196 The Meson build system 00:02:53.197 Version: 1.5.0 00:02:53.197 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:53.197 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:53.197 Build type: native build 00:02:53.197 Program cat found: YES (/usr/bin/cat) 00:02:53.197 Project name: DPDK 00:02:53.197 Project version: 23.11.0 00:02:53.197 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.197 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:53.197 Host machine cpu family: x86_64 00:02:53.197 Host machine cpu: x86_64 00:02:53.197 Message: ## Building in Developer Mode ## 00:02:53.197 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.197 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:53.197 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.197 Program python3 found: YES (/usr/bin/python3) 00:02:53.197 Program cat found: YES (/usr/bin/cat) 00:02:53.197 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:53.197 Compiler for C supports arguments -march=native: YES 00:02:53.197 Checking for size of "void *" : 8 00:02:53.197 Checking for size of "void *" : 8 (cached) 00:02:53.197 Library m found: YES 00:02:53.197 Library numa found: YES 00:02:53.197 Has header "numaif.h" : YES 00:02:53.197 Library fdt found: NO 00:02:53.197 Library execinfo found: NO 00:02:53.197 Has header "execinfo.h" : YES 00:02:53.197 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.197 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.197 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.197 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.197 Run-time dependency openssl found: YES 3.1.1 00:02:53.197 Run-time dependency libpcap found: YES 1.10.4 00:02:53.197 Has header "pcap.h" with dependency libpcap: YES 00:02:53.197 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.197 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.197 Compiler for C supports arguments -Wformat: YES 00:02:53.197 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.197 Compiler for C supports arguments -Wformat-security: NO 00:02:53.197 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.197 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.197 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.197 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.197 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.197 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.197 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.197 Compiler for C supports arguments -Wundef: YES 00:02:53.197 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.197 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.197 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.197 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.197 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:53.197 Program objdump found: YES (/usr/bin/objdump) 00:02:53.197 Compiler for C supports arguments -mavx512f: YES 00:02:53.197 Checking if "AVX512 checking" compiles: YES 00:02:53.197 Fetching value of define "__SSE4_2__" : 1 00:02:53.197 Fetching value of define "__AES__" : 1 00:02:53.197 Fetching value of define "__AVX__" : 1 00:02:53.197 Fetching value of define "__AVX2__" : 1 00:02:53.197 Fetching value of define "__AVX512BW__" : 1 00:02:53.197 Fetching value of define "__AVX512CD__" : 1 00:02:53.197 Fetching value of define "__AVX512DQ__" : 1 00:02:53.197 Fetching value of define "__AVX512F__" : 1 00:02:53.197 Fetching value of define "__AVX512VL__" : 1 00:02:53.197 Fetching value of define "__PCLMUL__" : 1 00:02:53.197 Fetching value of define "__RDRND__" : 1 00:02:53.197 Fetching value of define "__RDSEED__" : 1 00:02:53.197 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.197 Fetching value of define "__znver1__" : (undefined) 00:02:53.197 Fetching value of define "__znver2__" : (undefined) 00:02:53.197 Fetching value of define "__znver3__" : (undefined) 00:02:53.197 Fetching value of define "__znver4__" : (undefined) 00:02:53.197 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.197 Message: lib/log: Defining dependency "log" 00:02:53.197 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.197 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:53.197 Checking for function "getentropy" : NO 00:02:53.197 Message: lib/eal: Defining dependency "eal" 00:02:53.197 Message: lib/ring: Defining dependency "ring" 00:02:53.197 Message: lib/rcu: Defining dependency "rcu" 00:02:53.197 Message: lib/mempool: Defining dependency "mempool" 00:02:53.197 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.197 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.197 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:53.197 Compiler for C supports arguments -mpclmul: YES 00:02:53.197 Compiler for C supports arguments -maes: YES 00:02:53.197 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.197 Compiler for C supports arguments -mavx512bw: YES 00:02:53.197 Compiler for C supports arguments -mavx512dq: YES 00:02:53.197 Compiler for C supports arguments -mavx512vl: YES 00:02:53.197 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.197 Compiler for C supports arguments -mavx2: YES 00:02:53.197 Compiler for C supports arguments -mavx: YES 00:02:53.197 Message: lib/net: Defining dependency "net" 00:02:53.197 Message: lib/meter: Defining dependency "meter" 00:02:53.197 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.197 Message: lib/pci: Defining dependency "pci" 00:02:53.197 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.197 Message: lib/metrics: Defining dependency "metrics" 00:02:53.197 Message: lib/hash: Defining dependency "hash" 00:02:53.197 Message: lib/timer: Defining dependency "timer" 00:02:53.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.197 Message: lib/acl: Defining dependency "acl" 00:02:53.197 Message: lib/bbdev: Defining dependency "bbdev" 00:02:53.197 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:53.197 Run-time dependency libelf found: YES 0.191 00:02:53.197 Message: lib/bpf: Defining dependency "bpf" 00:02:53.197 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:53.197 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.197 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.197 Message: lib/distributor: Defining dependency "distributor" 00:02:53.197 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.197 Message: lib/efd: Defining dependency "efd" 00:02:53.197 Message: lib/eventdev: Defining dependency "eventdev" 00:02:53.197 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:53.197 Message: lib/gpudev: Defining dependency "gpudev" 00:02:53.197 Message: lib/gro: Defining dependency "gro" 00:02:53.197 Message: lib/gso: Defining dependency "gso" 00:02:53.197 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:53.197 Message: lib/jobstats: Defining dependency "jobstats" 00:02:53.197 Message: lib/latencystats: Defining dependency "latencystats" 00:02:53.197 Message: lib/lpm: Defining dependency "lpm" 00:02:53.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:53.197 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:53.197 Message: lib/member: Defining dependency "member" 00:02:53.197 Message: lib/pcapng: Defining dependency "pcapng" 00:02:53.197 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.197 Message: lib/power: Defining dependency "power" 00:02:53.197 Message: lib/rawdev: Defining dependency "rawdev" 00:02:53.197 Message: lib/regexdev: Defining dependency "regexdev" 00:02:53.197 Message: lib/mldev: Defining dependency "mldev" 00:02:53.197 Message: lib/rib: Defining dependency "rib" 00:02:53.197 Message: lib/reorder: Defining dependency "reorder" 00:02:53.197 Message: lib/sched: Defining dependency "sched" 00:02:53.197 Message: lib/security: Defining dependency "security" 00:02:53.197 Message: lib/stack: Defining dependency "stack" 00:02:53.197 Has header "linux/userfaultfd.h" : YES 00:02:53.197 Has header "linux/vduse.h" : YES 00:02:53.197 Message: lib/vhost: Defining dependency "vhost" 00:02:53.197 Message: lib/ipsec: Defining dependency "ipsec" 00:02:53.197 Message: lib/pdcp: Defining dependency "pdcp" 00:02:53.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.197 Message: lib/fib: Defining dependency "fib" 00:02:53.197 Message: lib/port: Defining dependency "port" 00:02:53.197 Message: lib/pdump: Defining dependency "pdump" 00:02:53.197 Message: lib/table: Defining dependency "table" 00:02:53.197 Message: lib/pipeline: Defining dependency "pipeline" 00:02:53.197 Message: lib/graph: Defining dependency "graph" 00:02:53.197 Message: lib/node: Defining dependency "node" 00:02:53.197 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:54.136 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:54.136 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:54.136 Compiler for C supports arguments -Wno-unused-value: YES 00:02:54.136 Compiler for C supports arguments -Wno-format: YES 00:02:54.136 Compiler for C supports arguments -Wno-format-security: YES 00:02:54.136 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:54.136 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:54.136 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:54.136 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:54.136 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.136 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:54.136 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:54.136 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:54.136 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:54.136 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:54.136 Has header "sys/epoll.h" : YES 00:02:54.136 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:54.136 Configuring doxy-api-html.conf using configuration 00:02:54.136 Configuring doxy-api-man.conf using configuration 00:02:54.136 Program mandb found: YES (/usr/bin/mandb) 00:02:54.136 Program sphinx-build found: NO 00:02:54.136 Configuring rte_build_config.h using configuration
00:02:54.136 Message:
00:02:54.136 =================
00:02:54.136 Applications Enabled
00:02:54.136 =================
00:02:54.136
00:02:54.136 apps:
00:02:54.136 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:54.136 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:54.136 test-pmd, test-regex, test-sad, test-security-perf,
00:02:54.136
00:02:54.136 Message:
00:02:54.136 =================
00:02:54.136 Libraries Enabled
00:02:54.136 =================
00:02:54.136
00:02:54.136 libs:
00:02:54.136 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:54.136 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:54.136 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:54.136 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:54.136 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:54.136 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:54.136 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:54.136
00:02:54.136
00:02:54.136 Message:
00:02:54.136 ===============
00:02:54.136 Drivers Enabled
00:02:54.136 ===============
00:02:54.136
00:02:54.136 common:
00:02:54.136
00:02:54.136 bus:
00:02:54.136 pci, vdev,
00:02:54.136 mempool:
00:02:54.136 ring,
00:02:54.136 dma:
00:02:54.136
00:02:54.136 net:
00:02:54.136 i40e,
00:02:54.136 raw:
00:02:54.136
00:02:54.136 crypto:
00:02:54.136
00:02:54.136 compress:
00:02:54.136
00:02:54.136 regex:
00:02:54.136
00:02:54.136 ml:
00:02:54.136
00:02:54.136 vdpa:
00:02:54.136
00:02:54.136 event:
00:02:54.136
00:02:54.136 baseband:
00:02:54.136
00:02:54.136 gpu:
00:02:54.136
00:02:54.136
00:02:54.136 Message:
00:02:54.136 =================
00:02:54.136 Content Skipped
00:02:54.136 =================
00:02:54.136
00:02:54.136 apps:
00:02:54.136
00:02:54.136 libs:
00:02:54.136
00:02:54.136 drivers:
00:02:54.136 common/cpt: not in enabled drivers build config
00:02:54.136 common/dpaax: not in enabled drivers build config
00:02:54.136 common/iavf: not in enabled drivers build config
00:02:54.136 common/idpf: not in enabled drivers build config
00:02:54.136 common/mvep: not in enabled drivers build config
00:02:54.136 common/octeontx: not in enabled drivers build config
00:02:54.136 bus/auxiliary: not in enabled drivers build config
00:02:54.136 bus/cdx: not in enabled drivers build config
00:02:54.136 bus/dpaa: not in enabled drivers build config
00:02:54.136 bus/fslmc: not in enabled drivers build config
00:02:54.136 bus/ifpga: not in enabled drivers build config
00:02:54.136 bus/platform: not in enabled drivers build config
00:02:54.136 bus/vmbus: not in enabled drivers build config
00:02:54.136 common/cnxk: not in enabled drivers build config
00:02:54.136 common/mlx5: not in enabled drivers build config
00:02:54.136 common/nfp: not in enabled drivers build config
00:02:54.136 common/qat: not in enabled drivers build config
00:02:54.136 common/sfc_efx: not in enabled drivers build config
00:02:54.136 mempool/bucket: not in enabled drivers build config
00:02:54.136 mempool/cnxk: not in enabled drivers build config
00:02:54.136 mempool/dpaa: not in enabled drivers build config
00:02:54.136 mempool/dpaa2: not in enabled drivers build config
00:02:54.136 mempool/octeontx: not in enabled drivers build config
00:02:54.136 mempool/stack: not in enabled drivers build config
00:02:54.136 dma/cnxk: not in enabled drivers build config
00:02:54.136 dma/dpaa: not in enabled drivers build config
00:02:54.136 dma/dpaa2: not in enabled drivers build config
00:02:54.136 dma/hisilicon: not in enabled drivers build config
00:02:54.136 dma/idxd: not in enabled drivers build config
00:02:54.136 dma/ioat: not in enabled drivers build config
00:02:54.136 dma/skeleton: not in enabled drivers build config
00:02:54.136 net/af_packet: not in enabled drivers build config
00:02:54.136 net/af_xdp: not in enabled drivers build config
00:02:54.136 net/ark: not in enabled drivers build config
00:02:54.136 net/atlantic: not in enabled drivers build config
00:02:54.136 net/avp: not in enabled drivers build config
00:02:54.136 net/axgbe: not in enabled drivers build config
00:02:54.136 net/bnx2x: not in enabled drivers build config
00:02:54.136 net/bnxt: not in enabled drivers build config
00:02:54.136 net/bonding: not in enabled drivers build config
00:02:54.136 net/cnxk: not in enabled drivers build config
00:02:54.136 net/cpfl: not in enabled drivers build config
00:02:54.136 net/cxgbe: not in enabled drivers build config
00:02:54.136 net/dpaa: not in enabled drivers build config
00:02:54.136 net/dpaa2: not in enabled drivers build config
00:02:54.136 net/e1000: not in enabled drivers build config
00:02:54.136 net/ena: not in enabled drivers build config
00:02:54.136 net/enetc: not in enabled drivers build config
00:02:54.136 net/enetfec: not in enabled drivers build config
00:02:54.136 net/enic: not in enabled drivers build config
00:02:54.136 net/failsafe: not in enabled drivers build config
00:02:54.136 net/fm10k: not in enabled drivers build config
00:02:54.136 net/gve: not in enabled drivers build config
00:02:54.136 net/hinic: not in enabled drivers build config
00:02:54.136 net/hns3: not in enabled drivers build config
00:02:54.136 net/iavf: not in enabled drivers build config
00:02:54.136 net/ice: not in enabled drivers build config
00:02:54.136 net/idpf: not in enabled drivers build config
00:02:54.136 net/igc: not in enabled drivers build config
00:02:54.136 net/ionic: not in enabled drivers build config
00:02:54.136 net/ipn3ke: not in enabled drivers build config
00:02:54.136 net/ixgbe: not in enabled drivers build config
00:02:54.136 net/mana: not in enabled drivers build config
00:02:54.136 net/memif: not in enabled drivers build config
00:02:54.136 net/mlx4: not in enabled drivers build config
00:02:54.136 net/mlx5: not in enabled drivers build config
00:02:54.136 net/mvneta: not in enabled drivers build config
00:02:54.136 net/mvpp2: not in enabled drivers build config
00:02:54.136 net/netvsc: not in enabled drivers build config
00:02:54.136 net/nfb: not in enabled drivers build config
00:02:54.136 net/nfp: not in enabled drivers build config
00:02:54.136 net/ngbe: not in enabled drivers build config
00:02:54.137 net/null: not in enabled drivers build config
00:02:54.137 net/octeontx: not in enabled drivers build config
00:02:54.137 net/octeon_ep: not in enabled drivers build config
00:02:54.137 net/pcap: not in enabled drivers build config
00:02:54.137 net/pfe: not in enabled drivers build config
00:02:54.137 net/qede: not in enabled drivers build config
00:02:54.137 net/ring: not in enabled drivers build config
00:02:54.137 net/sfc: not in enabled drivers build config
00:02:54.137 net/softnic: not in enabled drivers build config
00:02:54.137 net/tap: not in enabled drivers build config
00:02:54.137 net/thunderx: not in enabled drivers build config
00:02:54.137 net/txgbe: not in enabled drivers build config
00:02:54.137 net/vdev_netvsc: not in enabled drivers build config
00:02:54.137 net/vhost: not in enabled drivers build config
00:02:54.137 net/virtio: not in enabled drivers build config
00:02:54.137 net/vmxnet3: not in enabled drivers build config
00:02:54.137 raw/cnxk_bphy: not in enabled drivers build config
00:02:54.137 raw/cnxk_gpio: not in enabled drivers build config
00:02:54.137 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:54.137 raw/ifpga: not in enabled drivers build config
00:02:54.137 raw/ntb: not in enabled drivers build config
00:02:54.137 raw/skeleton: not in enabled drivers build config
00:02:54.137 crypto/armv8: not in enabled drivers build config
00:02:54.137 crypto/bcmfs: not in enabled drivers build config
00:02:54.137 crypto/caam_jr: not in enabled drivers build config
00:02:54.137 crypto/ccp: not in enabled drivers build config
00:02:54.137 crypto/cnxk: not in enabled drivers build config
00:02:54.137 crypto/dpaa_sec: not in enabled drivers build config
00:02:54.137 crypto/dpaa2_sec: not in enabled drivers build config
00:02:54.137 crypto/ipsec_mb: not in enabled drivers build config
00:02:54.137 crypto/mlx5: not in enabled drivers build config
00:02:54.137 crypto/mvsam: not in enabled drivers build config
00:02:54.137 crypto/nitrox: not in enabled drivers build config
00:02:54.137 crypto/null: not in enabled drivers build config
00:02:54.137 crypto/octeontx: not in enabled drivers build config
00:02:54.137 crypto/openssl: not in enabled drivers build config
00:02:54.137 crypto/scheduler: not in enabled drivers build config
00:02:54.137 crypto/uadk: not in enabled drivers build config
00:02:54.137 crypto/virtio: not in enabled drivers build config
00:02:54.137 compress/isal: not in enabled drivers build config
00:02:54.137 compress/mlx5: not in enabled drivers build config
00:02:54.137 compress/octeontx: not in enabled drivers build config
00:02:54.137 compress/zlib: not in enabled drivers build config
00:02:54.137 regex/mlx5: not in enabled drivers build config
00:02:54.137 regex/cn9k: not in enabled drivers build config
00:02:54.137 ml/cnxk: not in enabled drivers build config
00:02:54.137 vdpa/ifc: not in enabled drivers build config
00:02:54.137 vdpa/mlx5: not in enabled drivers build config
00:02:54.137 vdpa/nfp: not in enabled drivers build config
00:02:54.137 vdpa/sfc: not in enabled drivers build config
00:02:54.137 event/cnxk: not in enabled drivers build config
00:02:54.137 event/dlb2: not in enabled drivers build config
00:02:54.137 event/dpaa: not in enabled drivers build config
00:02:54.137 event/dpaa2: not in enabled drivers build config
00:02:54.137 event/dsw: not in enabled drivers build config
00:02:54.137 event/opdl: not in enabled drivers build config
00:02:54.137 event/skeleton: not in enabled drivers build config
00:02:54.137 event/sw: not in enabled drivers build config
00:02:54.137 event/octeontx: not in enabled drivers build config
00:02:54.137 baseband/acc: not in enabled drivers build config
00:02:54.137 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:54.137 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:54.137 baseband/la12xx: not in enabled drivers build config
00:02:54.137 baseband/null: not in enabled drivers build config
00:02:54.137 baseband/turbo_sw: not in enabled drivers build config
00:02:54.137 gpu/cuda: not in enabled drivers build config
00:02:54.137
00:02:54.137
00:02:54.137 Build targets in project: 217
00:02:54.137
00:02:54.137 DPDK 23.11.0
00:02:54.137
00:02:54.137 User defined options
00:02:54.137 libdir : lib
00:02:54.137 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:54.137 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:54.137 c_link_args :
00:02:54.137 enable_docs : false
00:02:54.137 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:54.137 enable_kmods : false
00:02:54.137 machine : native
00:02:54.137 tests : false
00:02:54.137
00:02:54.137 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:54.137 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:54.137 15:02:52 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:54.137 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:54.137 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:54.396 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:54.396 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:54.396 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.396 [5/707] Linking static target lib/librte_kvargs.a 00:02:54.396 [6/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:54.396 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:54.396 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:54.396 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:54.396 [10/707] Linking static target lib/librte_log.a 00:02:54.396 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.656 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:54.656 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:54.656 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:54.656 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:54.656 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:54.914 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.914 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.914 [19/707] Linking target lib/librte_log.so.24.0 00:02:54.914 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.914 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:54.914 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.914 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.914 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.914 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:55.174 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:55.174 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.174 [28/707] Linking static target lib/librte_telemetry.a 00:02:55.174 [29/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:55.174 [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:55.174 [31/707] Linking target lib/librte_kvargs.so.24.0 00:02:55.174 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:55.174
[33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.434 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:55.434 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.434 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:55.434 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:55.434 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.434 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:55.434 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.434 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.434 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.434 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:55.434 [44/707] Linking target lib/librte_telemetry.so.24.0 00:02:55.693 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:55.693 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:55.693 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.953 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.953 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:55.953 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.953 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:55.953 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.953 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.953 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.953 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.953 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.953 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.213 [58/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.213 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.213 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.213 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.213 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.213 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.213 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.213 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.213 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.472 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.472 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.472 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.472 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.472 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.472 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.732 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.732 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.732 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.732 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.732 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.732 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.993 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.993 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.993 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.993 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.993 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:56.993 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.993 [85/707] Linking static target lib/librte_ring.a 00:02:56.993 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.253 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.253 [88/707] Linking static target lib/librte_eal.a 00:02:57.253 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.253 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.253 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.253 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.253 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.253 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.512 [95/707] Linking static target lib/librte_mempool.a 00:02:57.512 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.512 [97/707] Linking static target lib/librte_rcu.a 00:02:57.512 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.772 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.772 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.772 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.772 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.772 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.772 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.772 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.031 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.031 [107/707] Linking static target lib/librte_net.a 00:02:58.031 [108/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.031 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.031 [110/707] Linking static target lib/librte_mbuf.a 00:02:58.031 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.031 [112/707] Linking static target lib/librte_meter.a 00:02:58.291 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.291 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:58.291 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.291 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.291 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.291 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.551 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.811 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.811 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.071 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.071 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.071 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.071 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.071 [126/707] Linking static target lib/librte_pci.a 00:02:59.071 [127/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.071 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.071 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.330 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.330 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.330 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.330 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.330 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.330 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.330 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.330 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.330 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.330 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.330 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.589 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.589 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.589 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.589 [144/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.589 [145/707] Linking static target lib/librte_cmdline.a 00:02:59.848 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:59.848 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:59.848 [148/707] Linking static target lib/librte_metrics.a 00:02:59.848 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.848 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.107 [151/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.107 [152/707] Linking static target lib/librte_timer.a 00:03:00.107 [153/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.107 [154/707] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.367 [155/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:00.367 [156/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.627 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.627 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:00.886 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:00.886 [160/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:01.146 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:01.146 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:01.146 [163/707] Linking static target lib/librte_bitratestats.a 00:03:01.146 [164/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:01.146 [165/707] Linking static target lib/librte_bbdev.a 00:03:01.406 [166/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:01.406 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.406 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:01.665 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.665 [170/707] Linking static target lib/librte_hash.a 00:03:01.665 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:01.925 [172/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.925 [173/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:01.925 [174/707] Linking static target lib/acl/libavx2_tmp.a 00:03:01.925 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:01.925 [176/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.925 [177/707] Linking static target lib/librte_ethdev.a 00:03:01.925 [178/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:02.184 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:02.184 [180/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.184 [181/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:02.184 [182/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:02.445 [183/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:02.445 [184/707] Linking static target lib/librte_cfgfile.a 00:03:02.445 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:02.705 [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:02.705 [187/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.705 [188/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.705 [189/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.705 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:02.705 [191/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.705 [192/707] Linking static target lib/librte_compressdev.a 00:03:02.705 [193/707] Linking static target lib/librte_bpf.a 00:03:02.964 [194/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.964 [195/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:02.964 [196/707] Linking static target 
lib/librte_acl.a 00:03:03.224 [197/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.224 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:03.224 [199/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.224 [200/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.224 [201/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:03.224 [202/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.224 [203/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:03.483 [204/707] Linking static target lib/librte_distributor.a 00:03:03.483 [205/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.483 [206/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:03.483 [207/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.743 [208/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.743 [209/707] Linking target lib/librte_eal.so.24.0 00:03:03.743 [210/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:03.743 [211/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.743 [212/707] Linking target lib/librte_ring.so.24.0 00:03:03.743 [213/707] Linking target lib/librte_meter.so.24.0 00:03:04.003 [214/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:04.003 [215/707] Linking target lib/librte_pci.so.24.0 00:03:04.003 [216/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:04.003 [217/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:04.003 [218/707] Linking target lib/librte_timer.so.24.0 00:03:04.003 [219/707] Linking target lib/librte_rcu.so.24.0 00:03:04.003 [220/707] Linking target lib/librte_mempool.so.24.0 00:03:04.003 [221/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:04.003 [222/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:04.264 [223/707] Linking target lib/librte_acl.so.24.0 00:03:04.264 [224/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:04.264 [225/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:04.264 [226/707] Linking target lib/librte_mbuf.so.24.0 00:03:04.264 [227/707] Linking target lib/librte_cfgfile.so.24.0 00:03:04.264 [228/707] Linking static target lib/librte_dmadev.a 00:03:04.264 [229/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:04.264 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:04.264 [231/707] Linking static target lib/librte_efd.a 00:03:04.264 [232/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:04.264 [233/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:04.264 [234/707] Linking target lib/librte_net.so.24.0 00:03:04.524 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:04.524 [236/707] Linking target lib/librte_bbdev.so.24.0 00:03:04.524 [237/707] Generating 
lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.524 [238/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:04.524 [239/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.524 [240/707] Linking target lib/librte_compressdev.so.24.0 00:03:04.524 [241/707] Linking static target lib/librte_cryptodev.a 00:03:04.524 [242/707] Linking target lib/librte_distributor.so.24.0 00:03:04.524 [243/707] Linking target lib/librte_cmdline.so.24.0 00:03:04.524 [244/707] Linking target lib/librte_hash.so.24.0 00:03:04.524 [245/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.524 [246/707] Linking target lib/librte_dmadev.so.24.0 00:03:04.784 [247/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:04.784 [248/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:04.784 [249/707] Linking target lib/librte_efd.so.24.0 00:03:04.784 [250/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:04.784 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:05.044 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:05.044 [253/707] Linking static target lib/librte_dispatcher.a 00:03:05.304 [254/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:05.304 [255/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:05.304 [256/707] Linking static target lib/librte_gpudev.a 00:03:05.304 [257/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:05.304 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:05.304 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:05.564 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.824 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.824 [262/707] Linking target lib/librte_cryptodev.so.24.0 00:03:05.824 [263/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:05.824 [264/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:05.824 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:05.824 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:05.824 [267/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:05.824 [268/707] Linking static target lib/librte_gro.a 00:03:05.824 [269/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:05.824 [270/707] Linking static target lib/librte_eventdev.a 00:03:05.824 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:05.824 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:06.083 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.083 [274/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.083 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:06.083 [276/707] Linking target lib/librte_gpudev.so.24.0 00:03:06.083 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:06.343 [278/707] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:06.343 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:06.343 [280/707] Linking static target lib/librte_gso.a 00:03:06.603 [281/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:06.603 [282/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.603 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:06.603 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:06.603 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:06.603 [286/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:06.603 [287/707] Linking static target lib/librte_jobstats.a 00:03:06.603 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:06.871 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:06.871 [290/707] Linking static target lib/librte_ip_frag.a 00:03:06.871 [291/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.871 [292/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.871 [293/707] Linking target lib/librte_jobstats.so.24.0 00:03:07.139 [294/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:07.139 [295/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:07.139 [296/707] Linking target lib/librte_ethdev.so.24.0 00:03:07.139 [297/707] Linking static target lib/librte_latencystats.a 00:03:07.139 [298/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:07.139 [299/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:07.139 [300/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.139 [301/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:07.139 [302/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:07.139 [303/707] Linking target lib/librte_metrics.so.24.0 00:03:07.139 [304/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:07.139 [305/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.139 [306/707] Linking target lib/librte_bpf.so.24.0 00:03:07.428 [307/707] Linking target lib/librte_gro.so.24.0 00:03:07.428 [308/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:07.428 [309/707] Linking target lib/librte_gso.so.24.0 00:03:07.428 [310/707] Linking target lib/librte_bitratestats.so.24.0 00:03:07.428 [311/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:07.428 [312/707] Linking target lib/librte_ip_frag.so.24.0 00:03:07.428 [313/707] Linking target lib/librte_latencystats.so.24.0 00:03:07.428 [314/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:07.428 [315/707] Linking static target lib/librte_lpm.a 00:03:07.428 [316/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:07.428 [317/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:07.688 [318/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:07.688 [319/707] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:07.688 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:07.689 [321/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:07.689 [322/707] Linking static target lib/librte_pcapng.a 00:03:07.689 [323/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.689 [324/707] Linking target lib/librte_lpm.so.24.0 00:03:07.948 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:07.948 [326/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:07.948 [327/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:07.948 [328/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:07.948 [329/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:07.948 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.948 [331/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:07.948 [332/707] Linking target lib/librte_pcapng.so.24.0 00:03:08.207 [333/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.207 [334/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:08.207 [335/707] Linking target lib/librte_eventdev.so.24.0 00:03:08.207 [336/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.207 [337/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.207 [338/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:08.467 [339/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:08.467 [340/707] Linking target lib/librte_dispatcher.so.24.0 00:03:08.467 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.467 [342/707] Linking static target lib/librte_power.a 00:03:08.467 [343/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:08.467 [344/707] Linking static target lib/librte_rawdev.a 00:03:08.467 [345/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:08.467 [346/707] Linking static target lib/librte_regexdev.a 00:03:08.467 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:08.467 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:08.726 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:08.727 [350/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:08.727 [351/707] Linking static target lib/librte_member.a 00:03:08.727 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:08.727 [353/707] Linking static target lib/librte_mldev.a 00:03:08.987 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:08.987 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.987 [356/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:08.987 [357/707] Linking target lib/librte_rawdev.so.24.0 00:03:08.987 [358/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.987 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.987 [360/707] Compiling C 
object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:08.987 [361/707] Linking target lib/librte_member.so.24.0 00:03:08.987 [362/707] Linking target lib/librte_power.so.24.0 00:03:08.987 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:09.247 [364/707] Linking static target lib/librte_reorder.a 00:03:09.247 [365/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:09.247 [366/707] Linking static target lib/librte_rib.a 00:03:09.247 [367/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.247 [368/707] Linking target lib/librte_regexdev.so.24.0 00:03:09.247 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:09.506 [370/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:09.506 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:09.506 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:09.506 [373/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:09.506 [374/707] Linking static target lib/librte_stack.a 00:03:09.506 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.506 [376/707] Linking target lib/librte_reorder.so.24.0 00:03:09.506 [377/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.506 [378/707] Linking static target lib/librte_security.a 00:03:09.506 [379/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.765 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.765 [381/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:09.765 [382/707] Linking target lib/librte_rib.so.24.0 00:03:09.765 [383/707] Linking target lib/librte_stack.so.24.0 00:03:09.765 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:10.024 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:10.024 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:10.024 [387/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:10.024 [388/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.024 [389/707] Linking static target lib/librte_sched.a 00:03:10.024 [390/707] Linking target lib/librte_security.so.24.0 00:03:10.024 [391/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.024 [392/707] Linking target lib/librte_mldev.so.24.0 00:03:10.024 [393/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:10.283 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:10.283 [395/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.542 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:10.542 [397/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:10.542 [398/707] Linking target lib/librte_sched.so.24.0 00:03:10.542 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:10.543 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:10.802 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:10.802 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:10.802 
[403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:11.061 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:11.061 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:11.061 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:11.061 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:11.320 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:11.320 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:11.579 [410/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:11.579 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:11.579 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:11.579 [413/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:11.839 [414/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:11.839 [415/707] Linking static target lib/librte_ipsec.a 00:03:12.107 [416/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:12.107 [417/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:12.107 [418/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.108 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:12.108 [420/707] Linking target lib/librte_ipsec.so.24.0 00:03:12.383 [421/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:12.383 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:12.383 [423/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:12.383 [424/707] Linking static target lib/librte_fib.a 00:03:12.383 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:12.642 [426/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:12.642 [427/707] Linking static target lib/librte_pdcp.a 00:03:12.642 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:12.642 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.642 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:12.642 [431/707] Linking target lib/librte_fib.so.24.0 00:03:12.642 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:12.902 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.902 [434/707] Linking target lib/librte_pdcp.so.24.0 00:03:13.162 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:13.162 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:13.162 [437/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:13.162 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:13.421 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:13.421 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:13.680 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:13.680 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:13.680 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:13.680 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:13.940 [445/707] Compiling C 
object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:13.940 [446/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:13.940 [447/707] Linking static target lib/librte_port.a 00:03:13.940 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:13.940 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:13.940 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:13.940 [451/707] Linking static target lib/librte_pdump.a 00:03:14.199 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:14.199 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:14.459 [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.459 [455/707] Linking target lib/librte_pdump.so.24.0 00:03:14.459 [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.718 [457/707] Linking target lib/librte_port.so.24.0 00:03:14.718 [458/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:14.718 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:14.718 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:14.718 [461/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:14.977 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:14.977 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:14.977 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:15.236 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:15.236 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:15.236 [467/707] Linking static target lib/librte_table.a 00:03:15.236 [468/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:15.496 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:15.496 [470/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:15.754 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:15.754 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.754 [473/707] Linking target lib/librte_table.so.24.0 00:03:16.013 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:16.013 [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:16.013 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:16.013 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:16.273 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:16.273 [479/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:16.273 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:16.533 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:16.533 [482/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:16.533 [483/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:16.793 [484/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:16.793 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 
00:03:16.793 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:17.052 [487/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:17.052 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:17.052 [489/707] Linking static target lib/librte_graph.a 00:03:17.052 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:17.311 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:17.570 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:17.570 [493/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.570 [494/707] Linking target lib/librte_graph.so.24.0 00:03:17.570 [495/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:17.829 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:17.829 [497/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:17.829 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:17.829 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:17.830 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:17.830 [501/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:17.830 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.830 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:18.089 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:18.350 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.350 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:18.350 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.350 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.350 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.350 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:18.350 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.350 [512/707] Linking static target lib/librte_node.a 00:03:18.610 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.610 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.870 [515/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.870 [516/707] Linking target lib/librte_node.so.24.0 00:03:18.870 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.870 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.870 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.129 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.129 [521/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:19.129 [522/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.129 [523/707] Linking static target drivers/librte_bus_pci.a 00:03:19.129 [524/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.129 [525/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.129 [526/707] Linking static target drivers/librte_bus_vdev.a 
00:03:19.129 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:19.129 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:19.129 [529/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.389 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.389 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:19.389 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.389 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.389 [534/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:19.389 [535/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:19.648 [536/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.648 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.648 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.648 [539/707] Linking static target drivers/librte_mempool_ring.a 00:03:19.648 [540/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:19.648 [541/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.649 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:19.649 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:19.649 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:19.908 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:20.166 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:20.166 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:20.761 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:20.761 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:21.019 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:21.019 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:21.019 [552/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:21.019 [553/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:21.276 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:21.276 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:21.534 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:21.534 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:21.534 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:21.793 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:21.793 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:22.051 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:22.051 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:22.051 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:22.309 [564/707] 
Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:22.568 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:22.568 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:22.568 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:22.568 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:22.825 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:22.825 [570/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:22.825 [571/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:22.825 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:22.825 [573/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:23.083 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:23.083 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:23.340 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:23.340 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:23.340 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:23.597 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:23.597 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:23.597 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:23.855 [582/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:23.855 [583/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:23.855 [584/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:23.855 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:24.113 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:24.113 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.113 [588/707] Linking static target drivers/librte_net_i40e.a 00:03:24.113 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:24.113 [590/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.372 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:24.372 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:24.630 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:24.630 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:24.630 [595/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.889 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:24.889 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:24.889 [598/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:24.889 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:24.889 [600/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.889 [601/707] Linking static target lib/librte_vhost.a 00:03:25.148 [602/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:25.148 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:25.407 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:25.407 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:25.407 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:25.407 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:25.667 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:25.667 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:25.667 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:25.667 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:25.925 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:25.925 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:25.925 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:25.925 [615/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.184 [616/707] Linking target lib/librte_vhost.so.24.0 00:03:26.184 [617/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:26.184 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:26.184 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:26.444 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:27.013 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:27.013 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:27.013 [623/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:27.273 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:27.273 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:27.273 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.273 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.273 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:27.533 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:27.533 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:27.533 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:27.533 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:27.533 [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:27.792 [634/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:27.792 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:27.792 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:27.792 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:28.051 [638/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:28.051 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:28.051 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:28.051 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:28.310 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:28.310 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:28.310 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:28.569 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:28.569 [646/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:28.569 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:28.569 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:28.569 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:28.829 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:28.829 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:29.099 [652/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.099 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:29.099 [654/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:29.099 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:29.376 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:29.376 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:29.376 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:29.634 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:29.894 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:29.894 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:29.894 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:30.152 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:30.152 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:30.411 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:30.411 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:30.411 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:30.411 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:30.670 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:30.670 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:30.929 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:30.929 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:31.187 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:31.187 [674/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:31.446 [675/707] Linking static target lib/librte_pipeline.a 00:03:31.446 [676/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:31.704 [677/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:31.704 [678/707] 
Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:31.704 [679/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:31.704 [680/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:31.704 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:31.964 [682/707] Linking target app/dpdk-dumpcap 00:03:31.964 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:31.964 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:31.964 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:31.964 [686/707] Linking target app/dpdk-graph 00:03:31.964 [687/707] Linking target app/dpdk-pdump 00:03:31.964 [688/707] Linking target app/dpdk-proc-info 00:03:32.223 [689/707] Linking target app/dpdk-test-bbdev 00:03:32.223 [690/707] Linking target app/dpdk-test-acl 00:03:32.483 [691/707] Linking target app/dpdk-test-cmdline 00:03:32.483 [692/707] Linking target app/dpdk-test-compress-perf 00:03:32.483 [693/707] Linking target app/dpdk-test-crypto-perf 00:03:32.483 [694/707] Linking target app/dpdk-test-dma-perf 00:03:32.483 [695/707] Linking target app/dpdk-test-fib 00:03:32.483 [696/707] Linking target app/dpdk-test-flow-perf 00:03:32.483 [697/707] Linking target app/dpdk-test-eventdev 00:03:32.742 [698/707] Linking target app/dpdk-test-gpudev 00:03:32.742 [699/707] Linking target app/dpdk-test-mldev 00:03:32.742 [700/707] Linking target app/dpdk-test-pipeline 00:03:33.001 [701/707] Linking target app/dpdk-test-regex 00:03:33.001 [702/707] Linking target app/dpdk-testpmd 00:03:33.001 [703/707] Linking target app/dpdk-test-sad 00:03:33.571 [704/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:34.139 [705/707] Linking target app/dpdk-test-security-perf 00:03:36.675 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.675 [707/707] Linking target lib/librte_pipeline.so.24.0 00:03:36.675 15:03:35 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:36.675 15:03:35 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:36.675 15:03:35 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:36.934 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:36.934 [0/1] Installing files. 
00:03:37.195 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.195 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:37.197 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.197 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.198 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.198 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.199 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.199 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.200 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
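The entries above install each DPDK library twice, as the static archive (librte_*.a) and the versioned shared object (librte_*.so.24.0), both under the custom prefix /home/vagrant/spdk_repo/dpdk/build rather than a system path. As a minimal sketch of how an application could be compiled against this tree, assuming DPDK's meson build also emitted a libdpdk.pc pkg-config file under lib/pkgconfig (standard for DPDK's meson install, though not shown in this log), with app.c as a placeholder source file:

  # Assumption: libdpdk.pc exists under the install prefix logged above;
  # app.c is a hypothetical application source, not part of this build.
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  cc -O2 app.c -o app $(pkg-config --cflags --libs libdpdk)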
00:03:37.200 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.200 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
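Because these shared objects land outside the default linker search path, a binary linked against them needs the install lib directory at runtime, and the PMD drivers installed below go to a separate plugin directory (lib/dpdk/pmds-24.0) that EAL loads via its -d option. A hedged usage sketch with paths taken from this log (the interactive testpmd invocation itself is illustrative and not part of this job):

  # LD_LIBRARY_PATH covers the librte_*.so.24.0 objects installed above;
  # -d points EAL at the directory where librte_net_i40e.so.24.0 and the
  # other PMD drivers are installed in the lines that follow.
  export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
  /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 -- -i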
00:03:37.201 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.201 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.772 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.772 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.772 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.772 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.773 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.775 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.776 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.776 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:37.776 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:37.776 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:37.776 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:37.776 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:37.776 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:37.776 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:37.776 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:37.776 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:37.776 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:37.776 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:37.776 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:37.776 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:37.776 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:37.776 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:37.776 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:37.776 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:37.776 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:37.776 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:37.776 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:37.776 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:37.776 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:37.776 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:37.776 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:37.776 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:37.776 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:37.776 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:37.776 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:37.776 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:37.776 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:37.776 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:37.776 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:37.776 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:37.776 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:37.776 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:37.776 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:37.776 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:37.776 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:37.776 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:37.776 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:37.776 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:37.776 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:37.776 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:37.776 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:37.776 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:37.776 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:37.776 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:37.777 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:37.777 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:37.777 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:37.777 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:37.777 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:37.777 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:37.777 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:37.777 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:37.777 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:37.777 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:37.777 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:37.777 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:37.777 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:37.777 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:37.777 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:37.777 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:37.777 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:37.777 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:37.777 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:37.777 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:37.777 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:37.777 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:37.777 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:37.777 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:37.777 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:37.777 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:37.777 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:37.777 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:37.777 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:37.777 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:37.777 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:37.777 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:37.777 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:37.777 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:37.777 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:37.777 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:37.777 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:37.777 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:37.777 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:37.777 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:37.777 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:37.777 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:37.777 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:37.777 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:37.777 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:37.777 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:37.777 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:37.777 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:37.777 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:37.777 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:37.777 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:37.777 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:37.777 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:37.777 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:37.777 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:37.777 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:37.777 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:37.777 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:37.777 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:37.777 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:37.777 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:37.777 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:37.777 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:37.777 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:37.777 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:37.777 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:37.777 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:37.777 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:37.777 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:37.777 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:37.777 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:37.777 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:37.777 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:37.777 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:37.777 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:37.777 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:37.777 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:37.777 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:37.777 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:37.777 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:37.777 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:37.777 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:37.777 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:37.777 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:37.777 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:37.777 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:37.777 15:03:36 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:37.777 15:03:36 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:37.777
00:03:37.777 real 0m51.381s
00:03:37.777 user 5m16.664s
00:03:37.777 sys 1m12.434s
00:03:37.777 ************************************
00:03:37.777 END TEST build_native_dpdk
00:03:37.777 ************************************
00:03:37.777 15:03:36 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:37.777 15:03:36 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:37.777 15:03:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:37.777 15:03:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:37.777 15:03:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:37.778 15:03:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:37.778 15:03:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:37.778 15:03:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:37.778 15:03:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:37.778 15:03:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:38.037 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:38.295 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.295 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:38.295 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:38.554 Using 'verbs' RDMA provider
00:03:54.826 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:13.009 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:13.009 Creating mk/config.mk...done.
00:04:13.009 Creating mk/cc.flags.mk...done.
00:04:13.009 Type 'make' to build.
00:04:13.009 15:04:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:13.009 15:04:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:13.009 15:04:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:13.009 15:04:10 -- common/autotest_common.sh@10 -- $ set +x
00:04:13.009 ************************************
00:04:13.009 START TEST make
00:04:13.009 ************************************
00:04:13.009 15:04:10 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:13.009 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:13.009 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:13.009 meson setup builddir \
00:04:13.009 -Dwith-libaio=enabled \
00:04:13.009 -Dwith-liburing=enabled \
00:04:13.009 -Dwith-libvfn=disabled \
00:04:13.009 -Dwith-spdk=false && \
00:04:13.009 meson compile -C builddir && \
00:04:13.009 cd -)
00:04:13.009 make[1]: Nothing to be done for 'all'.
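Note on the handoff above: the configure step consumes the DPDK tree that was just staged, with --with-dpdk pointing at /home/vagrant/spdk_repo/dpdk/build, and the "Using ... for additional libs" line shows configure resolving it through the libdpdk.pc installed into build/lib/pkgconfig earlier in this run. A minimal sketch of how that staged tree could be inspected by hand with pkg-config (illustrative commands, not part of the captured log; the prefix is taken from this run):

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # report the DPDK version recorded in the staged libdpdk.pc
    pkg-config --modversion libdpdk
    # show the link flags a consumer such as SPDK's configure would pick up
    pkg-config --libs libdpdk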
00:04:14.910 The Meson build system 00:04:14.910 Version: 1.5.0 00:04:14.910 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:14.910 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:14.911 Build type: native build 00:04:14.911 Project name: xnvme 00:04:14.911 Project version: 0.7.3 00:04:14.911 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:14.911 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:14.911 Host machine cpu family: x86_64 00:04:14.911 Host machine cpu: x86_64 00:04:14.911 Message: host_machine.system: linux 00:04:14.911 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:14.911 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:14.911 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:14.911 Run-time dependency threads found: YES 00:04:14.911 Has header "setupapi.h" : NO 00:04:14.911 Has header "linux/blkzoned.h" : YES 00:04:14.911 Has header "linux/blkzoned.h" : YES (cached) 00:04:14.911 Has header "libaio.h" : YES 00:04:14.911 Library aio found: YES 00:04:14.911 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:14.911 Run-time dependency liburing found: YES 2.2 00:04:14.911 Dependency libvfn skipped: feature with-libvfn disabled 00:04:14.911 Run-time dependency appleframeworks found: NO (tried framework) 00:04:14.911 Run-time dependency appleframeworks found: NO (tried framework) 00:04:14.911 Configuring xnvme_config.h using configuration 00:04:14.911 Configuring xnvme.spec using configuration 00:04:14.911 Run-time dependency bash-completion found: YES 2.11 00:04:14.911 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:14.911 Program cp found: YES (/usr/bin/cp) 00:04:14.911 Has header "winsock2.h" : NO 00:04:14.911 Has header "dbghelp.h" : NO 00:04:14.911 Library rpcrt4 found: NO 00:04:14.911 Library rt found: YES 00:04:14.911 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:14.911 Found CMake: /usr/bin/cmake (3.27.7) 00:04:14.911 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:14.911 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:14.911 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:14.911 Build targets in project: 32 00:04:14.911 00:04:14.911 xnvme 0.7.3 00:04:14.911 00:04:14.911 User defined options 00:04:14.911 with-libaio : enabled 00:04:14.911 with-liburing: enabled 00:04:14.911 with-libvfn : disabled 00:04:14.911 with-spdk : false 00:04:14.911 00:04:14.911 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:15.169 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:15.169 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:15.169 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:15.169 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:15.169 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:15.169 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:15.169 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:15.169 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:15.169 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:15.169 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:15.169 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:15.169 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:15.169 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:15.169 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:15.428 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:15.428 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:15.428 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:15.428 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:15.428 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:15.428 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:15.428 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:15.428 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:15.428 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:15.428 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:15.428 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:15.428 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:15.428 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:15.428 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:15.428 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:15.428 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:15.428 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:15.428 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:15.428 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:15.428 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:15.428 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:15.428 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:15.428 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:15.428 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:15.687 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:15.687 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:15.687 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:15.687 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:15.687 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:15.687 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:15.687 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:15.687 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:15.687 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:15.687 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:15.687 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:15.687 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:15.687 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:15.687 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:15.687 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:15.687 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:15.687 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:15.687 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:15.687 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:15.687 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:15.687 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:15.687 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:15.687 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:15.687 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:15.687 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:15.687 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:15.687 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:15.687 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:15.946 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:15.946 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:15.946 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:15.946 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:15.946 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:15.946 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:15.946 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:15.946 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:15.946 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:15.946 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:15.946 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:15.946 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:15.946 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:15.946 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:15.946 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:15.946 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:15.946 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:16.205 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:16.205 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:16.205 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:16.205 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:16.205 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:16.205 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:16.205 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:16.205 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:16.205 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:16.205 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:16.205 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:16.205 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:16.205 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:16.205 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:16.205 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:16.205 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:16.205 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:16.205 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:16.205 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:16.205 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:16.205 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:16.205 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:16.205 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:16.464 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:16.464 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:16.464 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:16.464 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:16.464 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:16.464 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:16.464 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:16.464 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:16.464 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:16.464 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:16.464 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:16.464 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:16.464 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:16.464 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:16.464 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:16.464 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:16.464 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:16.464 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:16.464 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:16.464 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:16.464 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:16.464 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:16.464 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:16.464 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:16.464 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:16.464 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:16.464 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:16.723 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:16.723 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:16.723 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:16.723 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:16.723 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:16.723 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:16.723 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:16.723 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:16.723 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:16.723 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:16.723 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:16.723 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:16.723 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:16.723 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:16.723 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:16.982 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:16.982 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:16.982 [150/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:16.982 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:16.982 [152/203] Linking target lib/libxnvme.so 00:04:16.982 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:16.982 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:16.982 [155/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:16.982 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:16.982 [157/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:16.982 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:16.982 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:16.982 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:16.982 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:16.982 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:17.240 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:17.240 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:17.240 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:17.240 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:17.240 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:17.240 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:17.240 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:17.240 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:17.240 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:17.240 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:17.241 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:17.241 [174/203] Linking static target lib/libxnvme.a 00:04:17.499 [175/203] Linking target tests/xnvme_tests_cli 00:04:17.499 [176/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:17.499 [177/203] Linking target tests/xnvme_tests_async_intf 00:04:17.499 [178/203] Linking target tests/xnvme_tests_buf 00:04:17.499 [179/203] Linking target tests/xnvme_tests_lblk 00:04:17.499 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:04:17.499 [181/203] Linking target tests/xnvme_tests_scc 00:04:17.499 [182/203] Linking target tests/xnvme_tests_ioworker 00:04:17.499 [183/203] Linking target tests/xnvme_tests_enum 00:04:17.499 [184/203] Linking target tests/xnvme_tests_znd_append 00:04:17.499 [185/203] Linking target tests/xnvme_tests_znd_state 00:04:17.499 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:17.499 [187/203] Linking target tests/xnvme_tests_kvs 00:04:17.499 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:17.499 [189/203] Linking target tests/xnvme_tests_map 00:04:17.499 [190/203] Linking target examples/xnvme_enum 00:04:17.499 
[191/203] Linking target tools/lblk 00:04:17.499 [192/203] Linking target tools/xdd 00:04:17.499 [193/203] Linking target tools/zoned 00:04:17.499 [194/203] Linking target examples/xnvme_dev 00:04:17.499 [195/203] Linking target tools/xnvme 00:04:17.499 [196/203] Linking target tools/xnvme_file 00:04:17.499 [197/203] Linking target tools/kvs 00:04:17.499 [198/203] Linking target examples/xnvme_hello 00:04:17.499 [199/203] Linking target examples/xnvme_single_async 00:04:17.499 [200/203] Linking target examples/xnvme_io_async 00:04:17.499 [201/203] Linking target examples/xnvme_single_sync 00:04:17.499 [202/203] Linking target examples/zoned_io_async 00:04:17.499 [203/203] Linking target examples/zoned_io_sync 00:04:17.499 INFO: autodetecting backend as ninja 00:04:17.500 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:17.500 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:04.176 CC lib/ut/ut.o 00:05:04.176 CC lib/ut_mock/mock.o 00:05:04.176 CC lib/log/log.o 00:05:04.176 CC lib/log/log_flags.o 00:05:04.176 CC lib/log/log_deprecated.o 00:05:04.176 LIB libspdk_ut_mock.a 00:05:04.176 LIB libspdk_ut.a 00:05:04.176 SO libspdk_ut_mock.so.6.0 00:05:04.176 SO libspdk_ut.so.2.0 00:05:04.176 LIB libspdk_log.a 00:05:04.176 SO libspdk_log.so.7.0 00:05:04.176 SYMLINK libspdk_ut_mock.so 00:05:04.176 SYMLINK libspdk_ut.so 00:05:04.176 SYMLINK libspdk_log.so 00:05:04.176 CC lib/dma/dma.o 00:05:04.176 CC lib/util/base64.o 00:05:04.176 CC lib/util/bit_array.o 00:05:04.176 CC lib/util/cpuset.o 00:05:04.176 CC lib/util/crc32.o 00:05:04.176 CC lib/util/crc32c.o 00:05:04.176 CC lib/util/crc16.o 00:05:04.176 CC lib/ioat/ioat.o 00:05:04.176 CXX lib/trace_parser/trace.o 00:05:04.176 CC lib/vfio_user/host/vfio_user_pci.o 00:05:04.176 CC lib/util/crc32_ieee.o 00:05:04.176 CC lib/util/crc64.o 00:05:04.176 CC lib/util/dif.o 00:05:04.176 CC lib/vfio_user/host/vfio_user.o 00:05:04.176 CC lib/util/fd.o 00:05:04.176 LIB libspdk_dma.a 00:05:04.176 CC lib/util/fd_group.o 00:05:04.176 SO libspdk_dma.so.5.0 00:05:04.176 CC lib/util/file.o 00:05:04.176 CC lib/util/hexlify.o 00:05:04.176 SYMLINK libspdk_dma.so 00:05:04.176 CC lib/util/iov.o 00:05:04.176 CC lib/util/math.o 00:05:04.176 LIB libspdk_ioat.a 00:05:04.176 CC lib/util/net.o 00:05:04.176 SO libspdk_ioat.so.7.0 00:05:04.176 LIB libspdk_vfio_user.a 00:05:04.176 CC lib/util/pipe.o 00:05:04.176 SYMLINK libspdk_ioat.so 00:05:04.176 CC lib/util/strerror_tls.o 00:05:04.176 SO libspdk_vfio_user.so.5.0 00:05:04.176 CC lib/util/string.o 00:05:04.176 CC lib/util/uuid.o 00:05:04.176 CC lib/util/xor.o 00:05:04.176 CC lib/util/zipf.o 00:05:04.176 SYMLINK libspdk_vfio_user.so 00:05:04.176 CC lib/util/md5.o 00:05:04.176 LIB libspdk_util.a 00:05:04.176 SO libspdk_util.so.10.0 00:05:04.176 LIB libspdk_trace_parser.a 00:05:04.176 SYMLINK libspdk_util.so 00:05:04.176 SO libspdk_trace_parser.so.6.0 00:05:04.176 SYMLINK libspdk_trace_parser.so 00:05:04.176 CC lib/idxd/idxd_user.o 00:05:04.176 CC lib/idxd/idxd.o 00:05:04.176 CC lib/idxd/idxd_kernel.o 00:05:04.176 CC lib/json/json_parse.o 00:05:04.176 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:04.176 CC lib/rdma_provider/common.o 00:05:04.176 CC lib/conf/conf.o 00:05:04.176 CC lib/env_dpdk/env.o 00:05:04.176 CC lib/rdma_utils/rdma_utils.o 00:05:04.176 CC lib/vmd/vmd.o 00:05:04.176 CC lib/vmd/led.o 00:05:04.176 CC lib/json/json_util.o 00:05:04.177 LIB libspdk_rdma_provider.a 00:05:04.177 LIB libspdk_conf.a 00:05:04.177 CC lib/json/json_write.o 00:05:04.177 CC 
lib/env_dpdk/memory.o 00:05:04.177 SO libspdk_rdma_provider.so.6.0 00:05:04.177 SO libspdk_conf.so.6.0 00:05:04.177 LIB libspdk_rdma_utils.a 00:05:04.177 SYMLINK libspdk_rdma_provider.so 00:05:04.177 SO libspdk_rdma_utils.so.1.0 00:05:04.177 CC lib/env_dpdk/pci.o 00:05:04.177 CC lib/env_dpdk/init.o 00:05:04.177 SYMLINK libspdk_conf.so 00:05:04.177 CC lib/env_dpdk/threads.o 00:05:04.177 SYMLINK libspdk_rdma_utils.so 00:05:04.177 CC lib/env_dpdk/pci_ioat.o 00:05:04.177 CC lib/env_dpdk/pci_virtio.o 00:05:04.177 CC lib/env_dpdk/pci_vmd.o 00:05:04.177 CC lib/env_dpdk/pci_idxd.o 00:05:04.177 LIB libspdk_json.a 00:05:04.177 CC lib/env_dpdk/pci_event.o 00:05:04.177 SO libspdk_json.so.6.0 00:05:04.177 CC lib/env_dpdk/sigbus_handler.o 00:05:04.177 LIB libspdk_idxd.a 00:05:04.177 SYMLINK libspdk_json.so 00:05:04.177 CC lib/env_dpdk/pci_dpdk.o 00:05:04.177 SO libspdk_idxd.so.12.1 00:05:04.177 LIB libspdk_vmd.a 00:05:04.177 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:04.177 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:04.177 SO libspdk_vmd.so.6.0 00:05:04.177 SYMLINK libspdk_idxd.so 00:05:04.177 SYMLINK libspdk_vmd.so 00:05:04.177 CC lib/jsonrpc/jsonrpc_server.o 00:05:04.177 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:04.177 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:04.177 CC lib/jsonrpc/jsonrpc_client.o 00:05:04.177 LIB libspdk_jsonrpc.a 00:05:04.177 SO libspdk_jsonrpc.so.6.0 00:05:04.177 SYMLINK libspdk_jsonrpc.so 00:05:04.177 CC lib/rpc/rpc.o 00:05:04.177 LIB libspdk_env_dpdk.a 00:05:04.177 SO libspdk_env_dpdk.so.15.0 00:05:04.177 LIB libspdk_rpc.a 00:05:04.177 SO libspdk_rpc.so.6.0 00:05:04.436 SYMLINK libspdk_env_dpdk.so 00:05:04.436 SYMLINK libspdk_rpc.so 00:05:04.696 CC lib/trace/trace_flags.o 00:05:04.696 CC lib/trace/trace.o 00:05:04.696 CC lib/trace/trace_rpc.o 00:05:04.696 CC lib/notify/notify.o 00:05:04.696 CC lib/notify/notify_rpc.o 00:05:04.696 CC lib/keyring/keyring.o 00:05:04.696 CC lib/keyring/keyring_rpc.o 00:05:04.954 LIB libspdk_notify.a 00:05:04.954 SO libspdk_notify.so.6.0 00:05:04.954 LIB libspdk_keyring.a 00:05:04.954 SYMLINK libspdk_notify.so 00:05:05.213 SO libspdk_keyring.so.2.0 00:05:05.213 LIB libspdk_trace.a 00:05:05.213 SO libspdk_trace.so.11.0 00:05:05.213 SYMLINK libspdk_keyring.so 00:05:05.213 SYMLINK libspdk_trace.so 00:05:05.779 CC lib/thread/thread.o 00:05:05.779 CC lib/sock/sock.o 00:05:05.779 CC lib/sock/sock_rpc.o 00:05:05.779 CC lib/thread/iobuf.o 00:05:06.346 LIB libspdk_sock.a 00:05:06.346 SO libspdk_sock.so.10.0 00:05:06.346 SYMLINK libspdk_sock.so 00:05:06.914 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:06.914 CC lib/nvme/nvme_ctrlr.o 00:05:06.914 CC lib/nvme/nvme_fabric.o 00:05:06.914 CC lib/nvme/nvme_ns_cmd.o 00:05:06.914 CC lib/nvme/nvme_ns.o 00:05:06.914 CC lib/nvme/nvme_pcie_common.o 00:05:06.914 CC lib/nvme/nvme.o 00:05:06.914 CC lib/nvme/nvme_qpair.o 00:05:06.914 CC lib/nvme/nvme_pcie.o 00:05:07.482 LIB libspdk_thread.a 00:05:07.482 SO libspdk_thread.so.10.1 00:05:07.482 CC lib/nvme/nvme_quirks.o 00:05:07.482 CC lib/nvme/nvme_transport.o 00:05:07.482 SYMLINK libspdk_thread.so 00:05:07.482 CC lib/nvme/nvme_discovery.o 00:05:07.482 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:07.742 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:07.742 CC lib/nvme/nvme_tcp.o 00:05:07.742 CC lib/nvme/nvme_opal.o 00:05:07.742 CC lib/nvme/nvme_io_msg.o 00:05:08.000 CC lib/nvme/nvme_poll_group.o 00:05:08.000 CC lib/nvme/nvme_zns.o 00:05:08.000 CC lib/nvme/nvme_stubs.o 00:05:08.000 CC lib/nvme/nvme_auth.o 00:05:08.258 CC lib/nvme/nvme_cuse.o 00:05:08.258 CC lib/nvme/nvme_rdma.o 00:05:08.516 CC lib/accel/accel.o 
00:05:08.516 CC lib/accel/accel_rpc.o 00:05:08.516 CC lib/blob/blobstore.o 00:05:08.516 CC lib/blob/request.o 00:05:08.774 CC lib/blob/zeroes.o 00:05:08.774 CC lib/accel/accel_sw.o 00:05:08.774 CC lib/blob/blob_bs_dev.o 00:05:09.339 CC lib/init/json_config.o 00:05:09.339 CC lib/init/subsystem.o 00:05:09.339 CC lib/init/subsystem_rpc.o 00:05:09.339 CC lib/virtio/virtio.o 00:05:09.339 CC lib/virtio/virtio_vhost_user.o 00:05:09.339 CC lib/fsdev/fsdev.o 00:05:09.339 CC lib/init/rpc.o 00:05:09.339 CC lib/virtio/virtio_vfio_user.o 00:05:09.339 CC lib/fsdev/fsdev_io.o 00:05:09.598 CC lib/fsdev/fsdev_rpc.o 00:05:09.598 LIB libspdk_init.a 00:05:09.598 CC lib/virtio/virtio_pci.o 00:05:09.598 SO libspdk_init.so.6.0 00:05:09.598 SYMLINK libspdk_init.so 00:05:09.856 LIB libspdk_accel.a 00:05:09.856 SO libspdk_accel.so.16.0 00:05:09.856 LIB libspdk_nvme.a 00:05:09.856 SYMLINK libspdk_accel.so 00:05:09.856 LIB libspdk_virtio.a 00:05:10.121 CC lib/event/app_rpc.o 00:05:10.121 CC lib/event/app.o 00:05:10.121 CC lib/event/reactor.o 00:05:10.121 CC lib/event/log_rpc.o 00:05:10.121 CC lib/event/scheduler_static.o 00:05:10.121 SO libspdk_virtio.so.7.0 00:05:10.121 LIB libspdk_fsdev.a 00:05:10.121 SO libspdk_fsdev.so.1.0 00:05:10.121 SO libspdk_nvme.so.14.0 00:05:10.121 SYMLINK libspdk_virtio.so 00:05:10.121 SYMLINK libspdk_fsdev.so 00:05:10.121 CC lib/bdev/bdev.o 00:05:10.121 CC lib/bdev/bdev_rpc.o 00:05:10.408 CC lib/bdev/bdev_zone.o 00:05:10.408 CC lib/bdev/part.o 00:05:10.408 CC lib/bdev/scsi_nvme.o 00:05:10.408 SYMLINK libspdk_nvme.so 00:05:10.408 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:10.674 LIB libspdk_event.a 00:05:10.674 SO libspdk_event.so.14.0 00:05:10.674 SYMLINK libspdk_event.so 00:05:11.242 LIB libspdk_fuse_dispatcher.a 00:05:11.242 SO libspdk_fuse_dispatcher.so.1.0 00:05:11.500 SYMLINK libspdk_fuse_dispatcher.so 00:05:12.436 LIB libspdk_blob.a 00:05:12.694 SO libspdk_blob.so.11.0 00:05:12.695 SYMLINK libspdk_blob.so 00:05:13.263 CC lib/blobfs/blobfs.o 00:05:13.263 CC lib/blobfs/tree.o 00:05:13.263 CC lib/lvol/lvol.o 00:05:13.522 LIB libspdk_bdev.a 00:05:13.522 SO libspdk_bdev.so.16.0 00:05:13.781 SYMLINK libspdk_bdev.so 00:05:14.038 CC lib/nbd/nbd_rpc.o 00:05:14.038 CC lib/nbd/nbd.o 00:05:14.038 CC lib/scsi/dev.o 00:05:14.038 CC lib/scsi/lun.o 00:05:14.038 CC lib/scsi/port.o 00:05:14.038 CC lib/ftl/ftl_core.o 00:05:14.038 CC lib/nvmf/ctrlr.o 00:05:14.038 CC lib/ublk/ublk.o 00:05:14.330 CC lib/ublk/ublk_rpc.o 00:05:14.330 LIB libspdk_blobfs.a 00:05:14.330 CC lib/ftl/ftl_init.o 00:05:14.330 SO libspdk_blobfs.so.10.0 00:05:14.330 CC lib/scsi/scsi.o 00:05:14.330 SYMLINK libspdk_blobfs.so 00:05:14.330 CC lib/ftl/ftl_layout.o 00:05:14.330 CC lib/ftl/ftl_debug.o 00:05:14.330 LIB libspdk_lvol.a 00:05:14.330 CC lib/ftl/ftl_io.o 00:05:14.330 SO libspdk_lvol.so.10.0 00:05:14.330 CC lib/scsi/scsi_bdev.o 00:05:14.590 CC lib/ftl/ftl_sb.o 00:05:14.590 SYMLINK libspdk_lvol.so 00:05:14.590 CC lib/nvmf/ctrlr_discovery.o 00:05:14.590 CC lib/ftl/ftl_l2p.o 00:05:14.590 LIB libspdk_nbd.a 00:05:14.590 SO libspdk_nbd.so.7.0 00:05:14.590 SYMLINK libspdk_nbd.so 00:05:14.590 CC lib/ftl/ftl_l2p_flat.o 00:05:14.590 CC lib/nvmf/ctrlr_bdev.o 00:05:14.590 CC lib/ftl/ftl_nv_cache.o 00:05:14.590 CC lib/ftl/ftl_band.o 00:05:14.847 CC lib/ftl/ftl_band_ops.o 00:05:14.847 CC lib/nvmf/subsystem.o 00:05:14.847 LIB libspdk_ublk.a 00:05:14.847 SO libspdk_ublk.so.3.0 00:05:14.847 CC lib/ftl/ftl_writer.o 00:05:14.847 SYMLINK libspdk_ublk.so 00:05:14.847 CC lib/ftl/ftl_rq.o 00:05:15.105 CC lib/scsi/scsi_pr.o 00:05:15.105 CC 
lib/nvmf/nvmf.o 00:05:15.105 CC lib/nvmf/nvmf_rpc.o 00:05:15.105 CC lib/nvmf/transport.o 00:05:15.105 CC lib/nvmf/tcp.o 00:05:15.105 CC lib/nvmf/stubs.o 00:05:15.363 CC lib/scsi/scsi_rpc.o 00:05:15.620 CC lib/scsi/task.o 00:05:15.620 CC lib/nvmf/mdns_server.o 00:05:15.620 CC lib/nvmf/rdma.o 00:05:15.878 LIB libspdk_scsi.a 00:05:15.878 SO libspdk_scsi.so.9.0 00:05:15.878 CC lib/ftl/ftl_reloc.o 00:05:15.878 SYMLINK libspdk_scsi.so 00:05:15.878 CC lib/ftl/ftl_l2p_cache.o 00:05:15.878 CC lib/ftl/ftl_p2l.o 00:05:16.136 CC lib/ftl/ftl_p2l_log.o 00:05:16.136 CC lib/nvmf/auth.o 00:05:16.394 CC lib/ftl/mngt/ftl_mngt.o 00:05:16.394 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:16.394 CC lib/iscsi/conn.o 00:05:16.394 CC lib/vhost/vhost.o 00:05:16.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:16.652 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:16.652 CC lib/iscsi/init_grp.o 00:05:16.652 CC lib/iscsi/iscsi.o 00:05:16.652 CC lib/iscsi/param.o 00:05:16.652 CC lib/iscsi/portal_grp.o 00:05:16.910 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:16.910 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:16.910 CC lib/iscsi/tgt_node.o 00:05:16.910 CC lib/iscsi/iscsi_subsystem.o 00:05:17.167 CC lib/vhost/vhost_rpc.o 00:05:17.167 CC lib/iscsi/iscsi_rpc.o 00:05:17.167 CC lib/vhost/vhost_scsi.o 00:05:17.167 CC lib/vhost/vhost_blk.o 00:05:17.167 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:17.424 CC lib/vhost/rte_vhost_user.o 00:05:17.424 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:17.424 CC lib/iscsi/task.o 00:05:17.681 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:17.681 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:17.681 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:17.681 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:17.939 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:17.939 CC lib/ftl/utils/ftl_conf.o 00:05:17.939 CC lib/ftl/utils/ftl_md.o 00:05:17.939 CC lib/ftl/utils/ftl_mempool.o 00:05:18.195 CC lib/ftl/utils/ftl_bitmap.o 00:05:18.195 CC lib/ftl/utils/ftl_property.o 00:05:18.195 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:18.195 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:18.195 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:18.195 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:18.195 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:18.452 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:18.452 LIB libspdk_iscsi.a 00:05:18.452 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:18.452 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:18.452 LIB libspdk_vhost.a 00:05:18.452 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:18.452 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:18.452 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:18.452 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:18.452 LIB libspdk_nvmf.a 00:05:18.452 SO libspdk_iscsi.so.8.0 00:05:18.709 SO libspdk_vhost.so.8.0 00:05:18.709 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:18.709 CC lib/ftl/base/ftl_base_dev.o 00:05:18.709 SO libspdk_nvmf.so.19.0 00:05:18.709 SYMLINK libspdk_vhost.so 00:05:18.709 CC lib/ftl/base/ftl_base_bdev.o 00:05:18.709 CC lib/ftl/ftl_trace.o 00:05:18.709 SYMLINK libspdk_iscsi.so 00:05:18.966 SYMLINK libspdk_nvmf.so 00:05:18.966 LIB libspdk_ftl.a 00:05:19.223 SO libspdk_ftl.so.9.0 00:05:19.834 SYMLINK libspdk_ftl.so 00:05:20.092 CC module/env_dpdk/env_dpdk_rpc.o 00:05:20.092 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:20.092 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:20.092 CC module/scheduler/gscheduler/gscheduler.o 00:05:20.092 CC module/sock/posix/posix.o 00:05:20.092 CC module/accel/ioat/accel_ioat.o 00:05:20.092 CC module/accel/error/accel_error.o 00:05:20.092 CC module/blob/bdev/blob_bdev.o 00:05:20.092 CC module/keyring/file/keyring.o 
00:05:20.092 CC module/fsdev/aio/fsdev_aio.o 00:05:20.350 LIB libspdk_env_dpdk_rpc.a 00:05:20.350 SO libspdk_env_dpdk_rpc.so.6.0 00:05:20.350 SYMLINK libspdk_env_dpdk_rpc.so 00:05:20.350 CC module/accel/ioat/accel_ioat_rpc.o 00:05:20.350 CC module/keyring/file/keyring_rpc.o 00:05:20.350 LIB libspdk_scheduler_gscheduler.a 00:05:20.350 LIB libspdk_scheduler_dpdk_governor.a 00:05:20.350 SO libspdk_scheduler_gscheduler.so.4.0 00:05:20.350 CC module/accel/error/accel_error_rpc.o 00:05:20.350 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:20.350 LIB libspdk_scheduler_dynamic.a 00:05:20.350 SO libspdk_scheduler_dynamic.so.4.0 00:05:20.350 SYMLINK libspdk_scheduler_gscheduler.so 00:05:20.350 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:20.350 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:20.350 LIB libspdk_accel_ioat.a 00:05:20.606 LIB libspdk_blob_bdev.a 00:05:20.606 SYMLINK libspdk_scheduler_dynamic.so 00:05:20.606 LIB libspdk_keyring_file.a 00:05:20.606 SO libspdk_accel_ioat.so.6.0 00:05:20.606 SO libspdk_blob_bdev.so.11.0 00:05:20.606 LIB libspdk_accel_error.a 00:05:20.606 SO libspdk_keyring_file.so.2.0 00:05:20.606 CC module/keyring/linux/keyring.o 00:05:20.606 SO libspdk_accel_error.so.2.0 00:05:20.606 SYMLINK libspdk_accel_ioat.so 00:05:20.606 SYMLINK libspdk_blob_bdev.so 00:05:20.606 CC module/keyring/linux/keyring_rpc.o 00:05:20.606 SYMLINK libspdk_keyring_file.so 00:05:20.606 CC module/fsdev/aio/linux_aio_mgr.o 00:05:20.606 CC module/accel/dsa/accel_dsa.o 00:05:20.606 SYMLINK libspdk_accel_error.so 00:05:20.606 CC module/accel/dsa/accel_dsa_rpc.o 00:05:20.606 CC module/accel/iaa/accel_iaa.o 00:05:20.863 LIB libspdk_keyring_linux.a 00:05:20.863 SO libspdk_keyring_linux.so.1.0 00:05:20.863 CC module/accel/iaa/accel_iaa_rpc.o 00:05:20.863 SYMLINK libspdk_keyring_linux.so 00:05:20.863 CC module/bdev/delay/vbdev_delay.o 00:05:20.863 CC module/blobfs/bdev/blobfs_bdev.o 00:05:20.863 CC module/bdev/error/vbdev_error.o 00:05:20.863 LIB libspdk_fsdev_aio.a 00:05:20.863 LIB libspdk_accel_iaa.a 00:05:20.863 CC module/bdev/gpt/gpt.o 00:05:21.121 SO libspdk_fsdev_aio.so.1.0 00:05:21.121 LIB libspdk_accel_dsa.a 00:05:21.121 SO libspdk_accel_iaa.so.3.0 00:05:21.121 SO libspdk_accel_dsa.so.5.0 00:05:21.121 CC module/bdev/lvol/vbdev_lvol.o 00:05:21.121 LIB libspdk_sock_posix.a 00:05:21.121 SYMLINK libspdk_accel_iaa.so 00:05:21.121 SYMLINK libspdk_fsdev_aio.so 00:05:21.121 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:21.121 CC module/bdev/gpt/vbdev_gpt.o 00:05:21.121 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:21.121 SYMLINK libspdk_accel_dsa.so 00:05:21.121 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:21.121 CC module/bdev/malloc/bdev_malloc.o 00:05:21.121 SO libspdk_sock_posix.so.6.0 00:05:21.121 CC module/bdev/error/vbdev_error_rpc.o 00:05:21.121 SYMLINK libspdk_sock_posix.so 00:05:21.379 LIB libspdk_blobfs_bdev.a 00:05:21.379 LIB libspdk_bdev_delay.a 00:05:21.379 SO libspdk_blobfs_bdev.so.6.0 00:05:21.379 SO libspdk_bdev_delay.so.6.0 00:05:21.379 LIB libspdk_bdev_error.a 00:05:21.379 SYMLINK libspdk_blobfs_bdev.so 00:05:21.379 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:21.379 CC module/bdev/null/bdev_null.o 00:05:21.379 SYMLINK libspdk_bdev_delay.so 00:05:21.379 CC module/bdev/null/bdev_null_rpc.o 00:05:21.379 CC module/bdev/nvme/bdev_nvme.o 00:05:21.379 LIB libspdk_bdev_gpt.a 00:05:21.379 SO libspdk_bdev_error.so.6.0 00:05:21.379 SO libspdk_bdev_gpt.so.6.0 00:05:21.379 CC module/bdev/passthru/vbdev_passthru.o 00:05:21.636 SYMLINK libspdk_bdev_gpt.so 00:05:21.636 CC module/bdev/nvme/bdev_nvme_rpc.o 
00:05:21.636 CC module/bdev/nvme/nvme_rpc.o 00:05:21.636 SYMLINK libspdk_bdev_error.so 00:05:21.636 CC module/bdev/nvme/bdev_mdns_client.o 00:05:21.636 LIB libspdk_bdev_malloc.a 00:05:21.636 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:21.636 SO libspdk_bdev_malloc.so.6.0 00:05:21.636 SYMLINK libspdk_bdev_malloc.so 00:05:21.636 LIB libspdk_bdev_lvol.a 00:05:21.894 LIB libspdk_bdev_null.a 00:05:21.894 CC module/bdev/raid/bdev_raid.o 00:05:21.894 CC module/bdev/nvme/vbdev_opal.o 00:05:21.894 SO libspdk_bdev_lvol.so.6.0 00:05:21.894 SO libspdk_bdev_null.so.6.0 00:05:21.894 CC module/bdev/raid/bdev_raid_rpc.o 00:05:21.894 LIB libspdk_bdev_passthru.a 00:05:21.894 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:21.894 SYMLINK libspdk_bdev_null.so 00:05:21.894 SO libspdk_bdev_passthru.so.6.0 00:05:21.894 SYMLINK libspdk_bdev_lvol.so 00:05:21.894 CC module/bdev/raid/bdev_raid_sb.o 00:05:21.894 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:21.894 CC module/bdev/split/vbdev_split.o 00:05:21.894 SYMLINK libspdk_bdev_passthru.so 00:05:21.894 CC module/bdev/split/vbdev_split_rpc.o 00:05:22.156 CC module/bdev/raid/raid0.o 00:05:22.156 CC module/bdev/raid/raid1.o 00:05:22.156 LIB libspdk_bdev_split.a 00:05:22.156 CC module/bdev/raid/concat.o 00:05:22.156 SO libspdk_bdev_split.so.6.0 00:05:22.156 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:22.156 CC module/bdev/xnvme/bdev_xnvme.o 00:05:22.415 CC module/bdev/aio/bdev_aio.o 00:05:22.415 SYMLINK libspdk_bdev_split.so 00:05:22.415 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:22.415 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:22.415 CC module/bdev/aio/bdev_aio_rpc.o 00:05:22.415 LIB libspdk_bdev_xnvme.a 00:05:22.673 CC module/bdev/ftl/bdev_ftl.o 00:05:22.673 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:22.673 SO libspdk_bdev_xnvme.so.3.0 00:05:22.673 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:22.673 CC module/bdev/iscsi/bdev_iscsi.o 00:05:22.673 SYMLINK libspdk_bdev_xnvme.so 00:05:22.673 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:22.673 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:22.673 LIB libspdk_bdev_zone_block.a 00:05:22.673 LIB libspdk_bdev_aio.a 00:05:22.673 SO libspdk_bdev_zone_block.so.6.0 00:05:22.673 SO libspdk_bdev_aio.so.6.0 00:05:22.673 SYMLINK libspdk_bdev_zone_block.so 00:05:22.932 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:22.932 SYMLINK libspdk_bdev_aio.so 00:05:22.932 LIB libspdk_bdev_ftl.a 00:05:22.932 LIB libspdk_bdev_raid.a 00:05:22.932 SO libspdk_bdev_ftl.so.6.0 00:05:22.932 SO libspdk_bdev_raid.so.6.0 00:05:22.932 SYMLINK libspdk_bdev_ftl.so 00:05:23.192 LIB libspdk_bdev_iscsi.a 00:05:23.192 SYMLINK libspdk_bdev_raid.so 00:05:23.192 SO libspdk_bdev_iscsi.so.6.0 00:05:23.192 SYMLINK libspdk_bdev_iscsi.so 00:05:23.192 LIB libspdk_bdev_virtio.a 00:05:23.451 SO libspdk_bdev_virtio.so.6.0 00:05:23.451 SYMLINK libspdk_bdev_virtio.so 00:05:24.387 LIB libspdk_bdev_nvme.a 00:05:24.387 SO libspdk_bdev_nvme.so.7.0 00:05:24.387 SYMLINK libspdk_bdev_nvme.so 00:05:25.322 CC module/event/subsystems/iobuf/iobuf.o 00:05:25.322 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:25.322 CC module/event/subsystems/vmd/vmd.o 00:05:25.322 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:25.322 CC module/event/subsystems/sock/sock.o 00:05:25.322 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:25.322 CC module/event/subsystems/keyring/keyring.o 00:05:25.322 CC module/event/subsystems/scheduler/scheduler.o 00:05:25.322 CC module/event/subsystems/fsdev/fsdev.o 00:05:25.322 LIB libspdk_event_vhost_blk.a 00:05:25.322 LIB 
libspdk_event_sock.a 00:05:25.322 LIB libspdk_event_vmd.a 00:05:25.322 LIB libspdk_event_iobuf.a 00:05:25.322 LIB libspdk_event_scheduler.a 00:05:25.322 SO libspdk_event_vhost_blk.so.3.0 00:05:25.322 SO libspdk_event_sock.so.5.0 00:05:25.322 LIB libspdk_event_keyring.a 00:05:25.322 SO libspdk_event_vmd.so.6.0 00:05:25.322 LIB libspdk_event_fsdev.a 00:05:25.322 SO libspdk_event_scheduler.so.4.0 00:05:25.322 SO libspdk_event_iobuf.so.3.0 00:05:25.322 SO libspdk_event_keyring.so.1.0 00:05:25.322 SO libspdk_event_fsdev.so.1.0 00:05:25.322 SYMLINK libspdk_event_sock.so 00:05:25.322 SYMLINK libspdk_event_vhost_blk.so 00:05:25.322 SYMLINK libspdk_event_vmd.so 00:05:25.322 SYMLINK libspdk_event_keyring.so 00:05:25.322 SYMLINK libspdk_event_iobuf.so 00:05:25.322 SYMLINK libspdk_event_fsdev.so 00:05:25.322 SYMLINK libspdk_event_scheduler.so 00:05:25.889 CC module/event/subsystems/accel/accel.o 00:05:26.147 LIB libspdk_event_accel.a 00:05:26.147 SO libspdk_event_accel.so.6.0 00:05:26.147 SYMLINK libspdk_event_accel.so 00:05:26.715 CC module/event/subsystems/bdev/bdev.o 00:05:26.715 LIB libspdk_event_bdev.a 00:05:26.974 SO libspdk_event_bdev.so.6.0 00:05:26.974 SYMLINK libspdk_event_bdev.so 00:05:27.233 CC module/event/subsystems/ublk/ublk.o 00:05:27.233 CC module/event/subsystems/nbd/nbd.o 00:05:27.233 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:27.233 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:27.233 CC module/event/subsystems/scsi/scsi.o 00:05:27.491 LIB libspdk_event_ublk.a 00:05:27.491 LIB libspdk_event_nbd.a 00:05:27.491 SO libspdk_event_ublk.so.3.0 00:05:27.491 SO libspdk_event_nbd.so.6.0 00:05:27.491 LIB libspdk_event_scsi.a 00:05:27.491 LIB libspdk_event_nvmf.a 00:05:27.491 SO libspdk_event_scsi.so.6.0 00:05:27.491 SYMLINK libspdk_event_ublk.so 00:05:27.491 SYMLINK libspdk_event_nbd.so 00:05:27.750 SO libspdk_event_nvmf.so.6.0 00:05:27.750 SYMLINK libspdk_event_scsi.so 00:05:27.750 SYMLINK libspdk_event_nvmf.so 00:05:28.009 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:28.009 CC module/event/subsystems/iscsi/iscsi.o 00:05:28.268 LIB libspdk_event_vhost_scsi.a 00:05:28.268 LIB libspdk_event_iscsi.a 00:05:28.268 SO libspdk_event_vhost_scsi.so.3.0 00:05:28.268 SO libspdk_event_iscsi.so.6.0 00:05:28.527 SYMLINK libspdk_event_vhost_scsi.so 00:05:28.527 SYMLINK libspdk_event_iscsi.so 00:05:28.527 SO libspdk.so.6.0 00:05:28.786 SYMLINK libspdk.so 00:05:29.046 CC test/rpc_client/rpc_client_test.o 00:05:29.046 TEST_HEADER include/spdk/accel.h 00:05:29.046 TEST_HEADER include/spdk/accel_module.h 00:05:29.046 TEST_HEADER include/spdk/assert.h 00:05:29.046 TEST_HEADER include/spdk/barrier.h 00:05:29.046 TEST_HEADER include/spdk/base64.h 00:05:29.046 TEST_HEADER include/spdk/bdev.h 00:05:29.046 TEST_HEADER include/spdk/bdev_module.h 00:05:29.046 TEST_HEADER include/spdk/bdev_zone.h 00:05:29.046 CXX app/trace/trace.o 00:05:29.046 TEST_HEADER include/spdk/bit_array.h 00:05:29.046 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:29.046 TEST_HEADER include/spdk/bit_pool.h 00:05:29.046 TEST_HEADER include/spdk/blob_bdev.h 00:05:29.046 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:29.046 TEST_HEADER include/spdk/blobfs.h 00:05:29.046 TEST_HEADER include/spdk/blob.h 00:05:29.046 TEST_HEADER include/spdk/conf.h 00:05:29.046 TEST_HEADER include/spdk/config.h 00:05:29.046 TEST_HEADER include/spdk/cpuset.h 00:05:29.046 TEST_HEADER include/spdk/crc16.h 00:05:29.046 TEST_HEADER include/spdk/crc32.h 00:05:29.046 TEST_HEADER include/spdk/crc64.h 00:05:29.046 TEST_HEADER include/spdk/dif.h 
00:05:29.046 TEST_HEADER include/spdk/dma.h 00:05:29.046 TEST_HEADER include/spdk/endian.h 00:05:29.046 TEST_HEADER include/spdk/env_dpdk.h 00:05:29.046 TEST_HEADER include/spdk/env.h 00:05:29.046 TEST_HEADER include/spdk/event.h 00:05:29.046 TEST_HEADER include/spdk/fd_group.h 00:05:29.046 TEST_HEADER include/spdk/fd.h 00:05:29.046 CC examples/util/zipf/zipf.o 00:05:29.046 TEST_HEADER include/spdk/file.h 00:05:29.046 TEST_HEADER include/spdk/fsdev.h 00:05:29.046 TEST_HEADER include/spdk/fsdev_module.h 00:05:29.046 CC examples/ioat/perf/perf.o 00:05:29.047 TEST_HEADER include/spdk/ftl.h 00:05:29.047 CC test/thread/poller_perf/poller_perf.o 00:05:29.047 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:29.047 TEST_HEADER include/spdk/gpt_spec.h 00:05:29.047 TEST_HEADER include/spdk/hexlify.h 00:05:29.047 CC test/dma/test_dma/test_dma.o 00:05:29.047 TEST_HEADER include/spdk/histogram_data.h 00:05:29.305 TEST_HEADER include/spdk/idxd.h 00:05:29.305 TEST_HEADER include/spdk/idxd_spec.h 00:05:29.305 CC test/app/bdev_svc/bdev_svc.o 00:05:29.305 TEST_HEADER include/spdk/init.h 00:05:29.305 TEST_HEADER include/spdk/ioat.h 00:05:29.305 TEST_HEADER include/spdk/ioat_spec.h 00:05:29.305 TEST_HEADER include/spdk/iscsi_spec.h 00:05:29.305 TEST_HEADER include/spdk/json.h 00:05:29.305 TEST_HEADER include/spdk/jsonrpc.h 00:05:29.305 TEST_HEADER include/spdk/keyring.h 00:05:29.305 TEST_HEADER include/spdk/keyring_module.h 00:05:29.305 TEST_HEADER include/spdk/likely.h 00:05:29.305 TEST_HEADER include/spdk/log.h 00:05:29.305 TEST_HEADER include/spdk/lvol.h 00:05:29.305 TEST_HEADER include/spdk/md5.h 00:05:29.305 TEST_HEADER include/spdk/memory.h 00:05:29.305 TEST_HEADER include/spdk/mmio.h 00:05:29.305 TEST_HEADER include/spdk/nbd.h 00:05:29.305 TEST_HEADER include/spdk/net.h 00:05:29.305 TEST_HEADER include/spdk/notify.h 00:05:29.305 TEST_HEADER include/spdk/nvme.h 00:05:29.305 LINK rpc_client_test 00:05:29.305 TEST_HEADER include/spdk/nvme_intel.h 00:05:29.305 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:29.305 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:29.305 TEST_HEADER include/spdk/nvme_spec.h 00:05:29.305 TEST_HEADER include/spdk/nvme_zns.h 00:05:29.305 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:29.305 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:29.305 TEST_HEADER include/spdk/nvmf.h 00:05:29.305 LINK interrupt_tgt 00:05:29.305 TEST_HEADER include/spdk/nvmf_spec.h 00:05:29.305 TEST_HEADER include/spdk/nvmf_transport.h 00:05:29.305 CC test/env/mem_callbacks/mem_callbacks.o 00:05:29.305 TEST_HEADER include/spdk/opal.h 00:05:29.305 TEST_HEADER include/spdk/opal_spec.h 00:05:29.305 TEST_HEADER include/spdk/pci_ids.h 00:05:29.305 TEST_HEADER include/spdk/pipe.h 00:05:29.305 TEST_HEADER include/spdk/queue.h 00:05:29.305 TEST_HEADER include/spdk/reduce.h 00:05:29.305 TEST_HEADER include/spdk/rpc.h 00:05:29.305 TEST_HEADER include/spdk/scheduler.h 00:05:29.305 TEST_HEADER include/spdk/scsi.h 00:05:29.306 TEST_HEADER include/spdk/scsi_spec.h 00:05:29.306 TEST_HEADER include/spdk/sock.h 00:05:29.306 TEST_HEADER include/spdk/stdinc.h 00:05:29.306 LINK zipf 00:05:29.306 TEST_HEADER include/spdk/string.h 00:05:29.306 TEST_HEADER include/spdk/thread.h 00:05:29.306 TEST_HEADER include/spdk/trace.h 00:05:29.306 TEST_HEADER include/spdk/trace_parser.h 00:05:29.306 TEST_HEADER include/spdk/tree.h 00:05:29.306 TEST_HEADER include/spdk/ublk.h 00:05:29.306 LINK poller_perf 00:05:29.306 TEST_HEADER include/spdk/util.h 00:05:29.306 TEST_HEADER include/spdk/uuid.h 00:05:29.306 TEST_HEADER include/spdk/version.h 
00:05:29.306 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:29.306 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:29.306 TEST_HEADER include/spdk/vhost.h 00:05:29.306 TEST_HEADER include/spdk/vmd.h 00:05:29.306 TEST_HEADER include/spdk/xor.h 00:05:29.306 TEST_HEADER include/spdk/zipf.h 00:05:29.306 CXX test/cpp_headers/accel.o 00:05:29.306 LINK bdev_svc 00:05:29.306 LINK ioat_perf 00:05:29.306 CXX test/cpp_headers/accel_module.o 00:05:29.566 LINK spdk_trace 00:05:29.566 CXX test/cpp_headers/assert.o 00:05:29.566 CXX test/cpp_headers/barrier.o 00:05:29.566 CXX test/cpp_headers/base64.o 00:05:29.825 CC examples/ioat/verify/verify.o 00:05:29.825 CC test/env/vtophys/vtophys.o 00:05:29.825 CC test/app/histogram_perf/histogram_perf.o 00:05:29.825 CC app/trace_record/trace_record.o 00:05:29.825 CXX test/cpp_headers/bdev.o 00:05:29.825 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:29.825 LINK test_dma 00:05:29.825 LINK mem_callbacks 00:05:29.825 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:29.825 LINK vtophys 00:05:30.083 LINK histogram_perf 00:05:30.083 LINK verify 00:05:30.083 CC examples/thread/thread/thread_ex.o 00:05:30.083 CXX test/cpp_headers/bdev_module.o 00:05:30.083 CXX test/cpp_headers/bdev_zone.o 00:05:30.083 CXX test/cpp_headers/bit_array.o 00:05:30.083 LINK spdk_trace_record 00:05:30.342 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:30.342 CXX test/cpp_headers/bit_pool.o 00:05:30.342 LINK thread 00:05:30.342 LINK nvme_fuzz 00:05:30.342 CC test/app/jsoncat/jsoncat.o 00:05:30.342 CC test/event/event_perf/event_perf.o 00:05:30.342 CC test/app/stub/stub.o 00:05:30.602 CC test/nvme/aer/aer.o 00:05:30.602 LINK env_dpdk_post_init 00:05:30.602 CC app/nvmf_tgt/nvmf_main.o 00:05:30.602 CXX test/cpp_headers/blob_bdev.o 00:05:30.602 LINK jsoncat 00:05:30.602 CXX test/cpp_headers/blobfs_bdev.o 00:05:30.602 LINK event_perf 00:05:30.602 LINK stub 00:05:30.859 LINK nvmf_tgt 00:05:30.859 CXX test/cpp_headers/blobfs.o 00:05:30.859 CC test/env/memory/memory_ut.o 00:05:30.859 CXX test/cpp_headers/blob.o 00:05:30.859 LINK aer 00:05:30.859 CC examples/sock/hello_world/hello_sock.o 00:05:30.859 CC test/event/reactor/reactor.o 00:05:31.118 CC examples/vmd/lsvmd/lsvmd.o 00:05:31.118 CXX test/cpp_headers/conf.o 00:05:31.118 CC app/iscsi_tgt/iscsi_tgt.o 00:05:31.118 CXX test/cpp_headers/config.o 00:05:31.118 CC test/nvme/reset/reset.o 00:05:31.118 LINK reactor 00:05:31.118 CXX test/cpp_headers/cpuset.o 00:05:31.118 LINK lsvmd 00:05:31.118 LINK iscsi_tgt 00:05:31.375 LINK hello_sock 00:05:31.375 CC test/accel/dif/dif.o 00:05:31.375 CC test/blobfs/mkfs/mkfs.o 00:05:31.375 CXX test/cpp_headers/crc16.o 00:05:31.375 CC test/event/reactor_perf/reactor_perf.o 00:05:31.375 CC examples/vmd/led/led.o 00:05:31.375 LINK reset 00:05:31.632 LINK mkfs 00:05:31.632 CXX test/cpp_headers/crc32.o 00:05:31.632 LINK reactor_perf 00:05:31.632 LINK led 00:05:31.632 CC app/spdk_tgt/spdk_tgt.o 00:05:31.632 CXX test/cpp_headers/crc64.o 00:05:31.890 CC test/nvme/sgl/sgl.o 00:05:31.890 CC test/lvol/esnap/esnap.o 00:05:31.890 LINK spdk_tgt 00:05:31.890 CXX test/cpp_headers/dif.o 00:05:31.890 LINK iscsi_fuzz 00:05:31.890 CC examples/idxd/perf/perf.o 00:05:31.890 CC test/event/app_repeat/app_repeat.o 00:05:31.890 LINK memory_ut 00:05:31.890 CC test/nvme/e2edp/nvme_dp.o 00:05:32.147 LINK sgl 00:05:32.147 LINK dif 00:05:32.147 CXX test/cpp_headers/dma.o 00:05:32.147 LINK app_repeat 00:05:32.404 LINK nvme_dp 00:05:32.404 CC test/env/pci/pci_ut.o 00:05:32.404 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:32.404 CC 
app/spdk_lspci/spdk_lspci.o 00:05:32.404 CXX test/cpp_headers/endian.o 00:05:32.404 CC test/event/scheduler/scheduler.o 00:05:32.404 CC test/nvme/overhead/overhead.o 00:05:32.404 LINK idxd_perf 00:05:32.404 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:32.404 CC test/nvme/err_injection/err_injection.o 00:05:32.404 LINK spdk_lspci 00:05:32.662 CXX test/cpp_headers/env_dpdk.o 00:05:32.662 LINK scheduler 00:05:32.662 CC test/bdev/bdevio/bdevio.o 00:05:32.662 LINK overhead 00:05:32.662 LINK err_injection 00:05:32.662 CXX test/cpp_headers/env.o 00:05:32.662 LINK pci_ut 00:05:32.662 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:32.662 CC app/spdk_nvme_perf/perf.o 00:05:32.919 CC test/nvme/startup/startup.o 00:05:32.919 LINK vhost_fuzz 00:05:32.919 CXX test/cpp_headers/event.o 00:05:32.919 CC test/nvme/reserve/reserve.o 00:05:32.919 CC test/nvme/simple_copy/simple_copy.o 00:05:33.177 LINK startup 00:05:33.177 CXX test/cpp_headers/fd_group.o 00:05:33.177 LINK bdevio 00:05:33.177 CXX test/cpp_headers/fd.o 00:05:33.177 LINK hello_fsdev 00:05:33.177 CC test/nvme/connect_stress/connect_stress.o 00:05:33.177 LINK reserve 00:05:33.435 CXX test/cpp_headers/file.o 00:05:33.435 LINK simple_copy 00:05:33.435 CC test/nvme/boot_partition/boot_partition.o 00:05:33.435 LINK connect_stress 00:05:33.435 CC test/nvme/compliance/nvme_compliance.o 00:05:33.435 CC examples/accel/perf/accel_perf.o 00:05:33.435 CC test/nvme/fused_ordering/fused_ordering.o 00:05:33.435 CC app/spdk_nvme_identify/identify.o 00:05:33.435 CXX test/cpp_headers/fsdev.o 00:05:33.693 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:33.693 CXX test/cpp_headers/fsdev_module.o 00:05:33.693 LINK boot_partition 00:05:33.693 LINK fused_ordering 00:05:33.693 LINK spdk_nvme_perf 00:05:33.693 CXX test/cpp_headers/ftl.o 00:05:33.693 CC test/nvme/fdp/fdp.o 00:05:33.693 LINK doorbell_aers 00:05:33.693 LINK nvme_compliance 00:05:33.952 CC test/nvme/cuse/cuse.o 00:05:33.952 CXX test/cpp_headers/fuse_dispatcher.o 00:05:33.952 CXX test/cpp_headers/gpt_spec.o 00:05:33.952 CXX test/cpp_headers/hexlify.o 00:05:33.952 LINK accel_perf 00:05:33.952 CXX test/cpp_headers/histogram_data.o 00:05:34.210 CXX test/cpp_headers/idxd.o 00:05:34.210 CXX test/cpp_headers/idxd_spec.o 00:05:34.210 CC examples/blob/hello_world/hello_blob.o 00:05:34.210 LINK fdp 00:05:34.210 CC examples/nvme/hello_world/hello_world.o 00:05:34.210 CC examples/nvme/reconnect/reconnect.o 00:05:34.210 CXX test/cpp_headers/init.o 00:05:34.490 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:34.490 CC examples/bdev/hello_world/hello_bdev.o 00:05:34.490 LINK spdk_nvme_identify 00:05:34.490 LINK hello_blob 00:05:34.490 CXX test/cpp_headers/ioat.o 00:05:34.490 CC examples/nvme/arbitration/arbitration.o 00:05:34.490 LINK hello_world 00:05:34.752 LINK hello_bdev 00:05:34.752 LINK reconnect 00:05:34.752 CC app/spdk_nvme_discover/discovery_aer.o 00:05:34.752 CXX test/cpp_headers/ioat_spec.o 00:05:35.011 LINK spdk_nvme_discover 00:05:35.011 CC examples/blob/cli/blobcli.o 00:05:35.011 LINK arbitration 00:05:35.011 CC app/spdk_top/spdk_top.o 00:05:35.011 CC examples/nvme/hotplug/hotplug.o 00:05:35.011 CXX test/cpp_headers/iscsi_spec.o 00:05:35.011 CC examples/bdev/bdevperf/bdevperf.o 00:05:35.011 LINK nvme_manage 00:05:35.271 LINK cuse 00:05:35.271 CXX test/cpp_headers/json.o 00:05:35.271 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:35.271 LINK hotplug 00:05:35.271 CC app/vhost/vhost.o 00:05:35.271 CXX test/cpp_headers/jsonrpc.o 00:05:35.271 CXX test/cpp_headers/keyring.o 00:05:35.271 LINK cmb_copy 00:05:35.271 
LINK vhost 00:05:35.271 CC examples/nvme/abort/abort.o 00:05:35.530 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:35.530 CXX test/cpp_headers/keyring_module.o 00:05:35.530 LINK blobcli 00:05:35.530 CXX test/cpp_headers/likely.o 00:05:35.530 LINK pmr_persistence 00:05:35.530 CC app/spdk_dd/spdk_dd.o 00:05:35.788 CXX test/cpp_headers/log.o 00:05:35.788 CC app/fio/nvme/fio_plugin.o 00:05:35.788 CXX test/cpp_headers/lvol.o 00:05:35.788 CXX test/cpp_headers/md5.o 00:05:35.788 LINK bdevperf 00:05:35.788 CC app/fio/bdev/fio_plugin.o 00:05:35.789 LINK spdk_top 00:05:35.789 CXX test/cpp_headers/memory.o 00:05:36.047 LINK abort 00:05:36.047 CXX test/cpp_headers/mmio.o 00:05:36.047 CXX test/cpp_headers/nbd.o 00:05:36.047 CXX test/cpp_headers/net.o 00:05:36.047 LINK spdk_dd 00:05:36.047 CXX test/cpp_headers/notify.o 00:05:36.047 CXX test/cpp_headers/nvme.o 00:05:36.047 CXX test/cpp_headers/nvme_intel.o 00:05:36.047 CXX test/cpp_headers/nvme_ocssd.o 00:05:36.305 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:36.305 CXX test/cpp_headers/nvme_spec.o 00:05:36.305 CXX test/cpp_headers/nvme_zns.o 00:05:36.305 CXX test/cpp_headers/nvmf_cmd.o 00:05:36.305 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:36.305 CC examples/nvmf/nvmf/nvmf.o 00:05:36.305 CXX test/cpp_headers/nvmf.o 00:05:36.305 LINK spdk_bdev 00:05:36.305 CXX test/cpp_headers/nvmf_spec.o 00:05:36.305 CXX test/cpp_headers/nvmf_transport.o 00:05:36.305 CXX test/cpp_headers/opal.o 00:05:36.564 CXX test/cpp_headers/opal_spec.o 00:05:36.564 LINK spdk_nvme 00:05:36.564 CXX test/cpp_headers/pci_ids.o 00:05:36.564 CXX test/cpp_headers/pipe.o 00:05:36.564 CXX test/cpp_headers/queue.o 00:05:36.564 CXX test/cpp_headers/reduce.o 00:05:36.564 CXX test/cpp_headers/rpc.o 00:05:36.564 CXX test/cpp_headers/scheduler.o 00:05:36.564 LINK nvmf 00:05:36.564 CXX test/cpp_headers/scsi.o 00:05:36.564 CXX test/cpp_headers/scsi_spec.o 00:05:36.564 CXX test/cpp_headers/sock.o 00:05:36.564 CXX test/cpp_headers/stdinc.o 00:05:36.564 CXX test/cpp_headers/string.o 00:05:36.823 CXX test/cpp_headers/thread.o 00:05:36.823 CXX test/cpp_headers/trace.o 00:05:36.823 CXX test/cpp_headers/trace_parser.o 00:05:36.823 CXX test/cpp_headers/tree.o 00:05:36.823 CXX test/cpp_headers/ublk.o 00:05:36.823 CXX test/cpp_headers/util.o 00:05:36.823 CXX test/cpp_headers/uuid.o 00:05:36.823 CXX test/cpp_headers/version.o 00:05:36.823 CXX test/cpp_headers/vfio_user_pci.o 00:05:36.823 CXX test/cpp_headers/vfio_user_spec.o 00:05:36.823 CXX test/cpp_headers/vhost.o 00:05:36.823 CXX test/cpp_headers/vmd.o 00:05:36.823 CXX test/cpp_headers/xor.o 00:05:36.823 CXX test/cpp_headers/zipf.o 00:05:37.760 LINK esnap 00:05:38.329 00:05:38.329 real 1m26.223s 00:05:38.329 user 6m34.302s 00:05:38.329 sys 1m32.899s 00:05:38.329 15:05:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:38.329 15:05:36 make -- common/autotest_common.sh@10 -- $ set +x 00:05:38.329 ************************************ 00:05:38.329 END TEST make 00:05:38.329 ************************************ 00:05:38.329 15:05:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:38.329 15:05:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:38.329 15:05:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:38.329 15:05:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.329 15:05:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:38.329 15:05:36 -- pm/common@44 -- $ pid=6014 00:05:38.329 15:05:36 -- pm/common@50 -- $ kill -TERM 
6014 00:05:38.329 15:05:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.329 15:05:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:38.329 15:05:36 -- pm/common@44 -- $ pid=6015 00:05:38.329 15:05:36 -- pm/common@50 -- $ kill -TERM 6015 00:05:38.329 15:05:36 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:38.329 15:05:36 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:38.329 15:05:36 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:38.329 15:05:36 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:38.329 15:05:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.329 15:05:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.329 15:05:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.329 15:05:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.329 15:05:36 -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.329 15:05:36 -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.329 15:05:36 -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.329 15:05:36 -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.329 15:05:36 -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.589 15:05:36 -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.589 15:05:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.589 15:05:36 -- scripts/common.sh@344 -- # case "$op" in 00:05:38.589 15:05:36 -- scripts/common.sh@345 -- # : 1 00:05:38.589 15:05:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.589 15:05:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.589 15:05:36 -- scripts/common.sh@365 -- # decimal 1 00:05:38.589 15:05:36 -- scripts/common.sh@353 -- # local d=1 00:05:38.589 15:05:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.589 15:05:36 -- scripts/common.sh@355 -- # echo 1 00:05:38.589 15:05:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.589 15:05:36 -- scripts/common.sh@366 -- # decimal 2 00:05:38.589 15:05:36 -- scripts/common.sh@353 -- # local d=2 00:05:38.589 15:05:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.589 15:05:36 -- scripts/common.sh@355 -- # echo 2 00:05:38.589 15:05:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.589 15:05:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.589 15:05:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.589 15:05:36 -- scripts/common.sh@368 -- # return 0 00:05:38.589 15:05:36 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.589 15:05:36 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:38.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --rc geninfo_unexecuted_blocks=1 00:05:38.589 00:05:38.589 ' 00:05:38.589 15:05:36 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:38.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --rc geninfo_unexecuted_blocks=1 00:05:38.589 00:05:38.589 ' 00:05:38.589 15:05:36 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:38.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.589 --rc 
genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --rc geninfo_unexecuted_blocks=1 00:05:38.589 00:05:38.589 ' 00:05:38.589 15:05:36 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:38.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --rc geninfo_unexecuted_blocks=1 00:05:38.589 00:05:38.589 ' 00:05:38.589 15:05:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.589 15:05:36 -- nvmf/common.sh@7 -- # uname -s 00:05:38.589 15:05:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.589 15:05:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.589 15:05:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.589 15:05:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.589 15:05:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.589 15:05:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.589 15:05:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.589 15:05:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.589 15:05:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.589 15:05:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.589 15:05:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c618085-b29b-41c6-81e1-184c2b306579 00:05:38.589 15:05:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5c618085-b29b-41c6-81e1-184c2b306579 00:05:38.589 15:05:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.589 15:05:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.589 15:05:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.589 15:05:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.590 15:05:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.590 15:05:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.590 15:05:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.590 15:05:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.590 15:05:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.590 15:05:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.590 15:05:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.590 15:05:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.590 15:05:36 -- paths/export.sh@5 -- # export PATH 00:05:38.590 15:05:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.590 15:05:36 -- nvmf/common.sh@51 -- # : 0 00:05:38.590 15:05:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.590 15:05:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.590 15:05:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.590 15:05:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.590 15:05:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.590 15:05:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.590 15:05:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.590 15:05:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.590 15:05:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.590 15:05:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:38.590 15:05:36 -- spdk/autotest.sh@32 -- # uname -s 00:05:38.590 15:05:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:38.590 15:05:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:38.590 15:05:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:38.590 15:05:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:38.590 15:05:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:38.590 15:05:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:38.590 15:05:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:38.590 15:05:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:38.590 15:05:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:38.590 15:05:37 -- spdk/autotest.sh@48 -- # udevadm_pid=67700 00:05:38.590 15:05:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:38.590 15:05:37 -- pm/common@17 -- # local monitor 00:05:38.590 15:05:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.590 15:05:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.590 15:05:37 -- pm/common@25 -- # sleep 1 00:05:38.590 15:05:37 -- pm/common@21 -- # date +%s 00:05:38.590 15:05:37 -- pm/common@21 -- # date +%s 00:05:38.590 15:05:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727795137 00:05:38.590 15:05:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727795137 00:05:38.590 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727795137_collect-cpu-load.pm.log 00:05:38.590 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727795137_collect-vmstat.pm.log 00:05:39.629 15:05:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:39.629 15:05:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:39.629 15:05:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.629 15:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.629 15:05:38 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:39.629 15:05:38 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:39.629 15:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.629 15:05:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:39.629 15:05:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:39.629 15:05:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:39.629 15:05:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:39.629 15:05:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:39.629 15:05:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:39.629 15:05:38 -- common/autotest_common.sh@1455 -- # uname 00:05:39.629 15:05:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:39.629 15:05:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:39.629 15:05:38 -- common/autotest_common.sh@1475 -- # uname 00:05:39.629 15:05:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:39.629 15:05:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:39.629 15:05:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:39.888 lcov: LCOV version 1.15 00:05:39.888 15:05:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:54.770 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:54.770 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:12.893 15:06:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:12.893 15:06:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.893 15:06:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.893 15:06:09 -- spdk/autotest.sh@78 -- # rm -f 00:06:12.894 15:06:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.894 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:12.894 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:12.894 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:12.894 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:12.894 15:06:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:12.894 15:06:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:12.894 15:06:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:12.894 15:06:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:12.894 15:06:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:12.894 15:06:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:12.894 15:06:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:12.894 15:06:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:12.894 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.894 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.894 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:12.894 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:12.894 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:12.894 No valid GPT data, bailing 00:06:12.894 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:12.894 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:12.894 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:12.894 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:12.894 1+0 records in 00:06:12.894 1+0 records out 00:06:12.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019357 s, 54.2 MB/s 
00:06:12.894 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.894 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.894 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:12.894 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:12.894 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:13.154 No valid GPT data, bailing 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:13.154 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:13.154 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:13.154 1+0 records in 00:06:13.154 1+0 records out 00:06:13.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060446 s, 173 MB/s 00:06:13.154 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.154 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.154 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:13.154 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:13.154 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:13.154 No valid GPT data, bailing 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:13.154 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:13.154 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:13.154 1+0 records in 00:06:13.154 1+0 records out 00:06:13.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522552 s, 201 MB/s 00:06:13.154 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.154 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.154 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:13.154 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:13.154 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:13.154 No valid GPT data, bailing 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:13.154 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:13.154 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:13.154 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:13.154 1+0 records in 00:06:13.154 1+0 records out 00:06:13.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569636 s, 184 MB/s 00:06:13.154 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.154 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.154 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:13.154 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:13.154 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:13.413 No valid GPT data, bailing 00:06:13.413 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:13.413 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:13.413 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:13.413 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:13.413 1+0 records in 00:06:13.413 1+0 records out 00:06:13.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526579 s, 199 MB/s 
00:06:13.413 15:06:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.413 15:06:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.413 15:06:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:13.413 15:06:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:13.413 15:06:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:13.413 No valid GPT data, bailing 00:06:13.413 15:06:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:13.413 15:06:11 -- scripts/common.sh@394 -- # pt= 00:06:13.413 15:06:11 -- scripts/common.sh@395 -- # return 1 00:06:13.413 15:06:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:13.413 1+0 records in 00:06:13.413 1+0 records out 00:06:13.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421519 s, 249 MB/s 00:06:13.413 15:06:11 -- spdk/autotest.sh@105 -- # sync 00:06:13.413 15:06:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:13.413 15:06:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:13.413 15:06:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:16.705 15:06:15 -- spdk/autotest.sh@111 -- # uname -s 00:06:16.705 15:06:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:16.705 15:06:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:16.705 15:06:15 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:17.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.843 Hugepages 00:06:17.843 node hugesize free / total 00:06:17.843 node0 1048576kB 0 / 0 00:06:17.843 node0 2048kB 0 / 0 00:06:17.843 00:06:17.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:18.103 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:18.103 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:18.362 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:18.362 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:18.362 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:18.621 15:06:16 -- spdk/autotest.sh@117 -- # uname -s 00:06:18.621 15:06:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:18.621 15:06:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:18.621 15:06:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.128 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.128 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.128 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.128 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:20.128 15:06:18 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:21.064 15:06:19 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:21.064 15:06:19 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:21.064 15:06:19 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:21.064 15:06:19 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:21.064 15:06:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:21.064 15:06:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:21.064 15:06:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:06:21.064 15:06:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:21.064 15:06:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:21.343 15:06:19 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:21.343 15:06:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:21.343 15:06:19 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:21.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.188 Waiting for block devices as requested 00:06:22.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.446 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.446 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.446 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.720 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:27.720 15:06:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:27.720 15:06:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:27.720 15:06:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:27.720 15:06:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:27.720 15:06:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1541 -- # continue 00:06:27.720 15:06:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:27.720 15:06:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:27.720 15:06:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:06:27.720 15:06:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:27.720 15:06:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:27.720 15:06:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:27.720 15:06:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:27.720 15:06:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:27.721 15:06:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1541 -- # continue 00:06:27.721 15:06:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:27.721 15:06:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:27.721 15:06:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:06:27.721 15:06:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:27.721 15:06:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:27.721 15:06:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:27.721 15:06:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:27.721 15:06:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:27.721 15:06:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1541 -- # continue 00:06:27.721 15:06:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:27.721 15:06:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:27.721 15:06:26 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:06:27.721 15:06:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:27.721 15:06:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:06:27.721 15:06:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:06:27.980 15:06:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:06:27.980 15:06:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:27.980 15:06:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:27.980 15:06:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:27.980 15:06:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:27.980 15:06:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:27.980 15:06:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:06:27.980 15:06:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:27.980 15:06:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:27.980 15:06:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:27.980 15:06:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:27.980 15:06:26 -- common/autotest_common.sh@1541 -- # continue 00:06:27.980 15:06:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:27.980 15:06:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.980 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.980 15:06:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:27.980 15:06:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.980 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.980 15:06:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.484 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.484 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.484 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.484 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.484 15:06:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:29.484 15:06:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.484 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:29.484 15:06:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:29.484 15:06:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:29.484 15:06:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:29.484 15:06:28 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:29.484 15:06:28 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:29.484 15:06:28 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:29.484 15:06:28 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:29.484 15:06:28 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:29.484 15:06:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:29.484 
15:06:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:29.484 15:06:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.484 15:06:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.484 15:06:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:29.744 15:06:28 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:29.744 15:06:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:29.744 15:06:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.744 15:06:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.744 15:06:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.744 15:06:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.744 15:06:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.744 15:06:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.744 15:06:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:29.744 15:06:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.744 15:06:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.744 15:06:28 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:29.744 15:06:28 -- common/autotest_common.sh@1570 -- # return 0 00:06:29.744 15:06:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:29.744 15:06:28 -- common/autotest_common.sh@1578 -- # return 0 00:06:29.744 15:06:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:29.744 15:06:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:29.744 15:06:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.744 15:06:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.744 15:06:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:29.744 15:06:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.744 15:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.744 15:06:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:29.744 15:06:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:29.744 15:06:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.744 15:06:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.744 15:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.744 ************************************ 00:06:29.744 START TEST env 00:06:29.744 ************************************ 00:06:29.744 15:06:28 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:30.004 * Looking for test storage... 
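The escaped pattern \0\x\0\a\5\4 in those comparisons is simply the literal string 0x0a54 (an Intel datacenter NVMe device ID) with each character backslash-quoted so that [[ ... ]] matches it verbatim rather than as a glob; the QEMU controllers report 1b36:0010, so nothing matches and the OPAL revert is skipped. A sketch of the same enumerate-and-filter step, assuming the repo path shown in the log:

    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    wanted=0x0a54
    for bdf in "${bdfs[@]}"; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on these QEMU devices
        [[ $dev == "$wanted" ]] && echo "$bdf matches $wanted"
    done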
00:06:30.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.004 15:06:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.004 15:06:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.004 15:06:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.004 15:06:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.004 15:06:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.004 15:06:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.004 15:06:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.004 15:06:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.004 15:06:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.004 15:06:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.004 15:06:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.004 15:06:28 env -- scripts/common.sh@344 -- # case "$op" in 00:06:30.004 15:06:28 env -- scripts/common.sh@345 -- # : 1 00:06:30.004 15:06:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.004 15:06:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.004 15:06:28 env -- scripts/common.sh@365 -- # decimal 1 00:06:30.004 15:06:28 env -- scripts/common.sh@353 -- # local d=1 00:06:30.004 15:06:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.004 15:06:28 env -- scripts/common.sh@355 -- # echo 1 00:06:30.004 15:06:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.004 15:06:28 env -- scripts/common.sh@366 -- # decimal 2 00:06:30.004 15:06:28 env -- scripts/common.sh@353 -- # local d=2 00:06:30.004 15:06:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.004 15:06:28 env -- scripts/common.sh@355 -- # echo 2 00:06:30.004 15:06:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.004 15:06:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.004 15:06:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.004 15:06:28 env -- scripts/common.sh@368 -- # return 0 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.004 --rc genhtml_branch_coverage=1 00:06:30.004 --rc genhtml_function_coverage=1 00:06:30.004 --rc genhtml_legend=1 00:06:30.004 --rc geninfo_all_blocks=1 00:06:30.004 --rc geninfo_unexecuted_blocks=1 00:06:30.004 00:06:30.004 ' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.004 --rc genhtml_branch_coverage=1 00:06:30.004 --rc genhtml_function_coverage=1 00:06:30.004 --rc genhtml_legend=1 00:06:30.004 --rc geninfo_all_blocks=1 00:06:30.004 --rc geninfo_unexecuted_blocks=1 00:06:30.004 00:06:30.004 ' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.004 --rc genhtml_branch_coverage=1 00:06:30.004 --rc genhtml_function_coverage=1 00:06:30.004 --rc 
genhtml_legend=1 00:06:30.004 --rc geninfo_all_blocks=1 00:06:30.004 --rc geninfo_unexecuted_blocks=1 00:06:30.004 00:06:30.004 ' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.004 --rc genhtml_branch_coverage=1 00:06:30.004 --rc genhtml_function_coverage=1 00:06:30.004 --rc genhtml_legend=1 00:06:30.004 --rc geninfo_all_blocks=1 00:06:30.004 --rc geninfo_unexecuted_blocks=1 00:06:30.004 00:06:30.004 ' 00:06:30.004 15:06:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.004 15:06:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.004 15:06:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.004 ************************************ 00:06:30.004 START TEST env_memory 00:06:30.004 ************************************ 00:06:30.004 15:06:28 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:30.004 00:06:30.004 00:06:30.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.004 http://cunit.sourceforge.net/ 00:06:30.004 00:06:30.004 00:06:30.004 Suite: memory 00:06:30.004 Test: alloc and free memory map ...[2024-10-01 15:06:28.503790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:30.004 passed 00:06:30.004 Test: mem map translation ...[2024-10-01 15:06:28.549353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:30.004 [2024-10-01 15:06:28.549412] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:30.004 [2024-10-01 15:06:28.549480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:30.004 [2024-10-01 15:06:28.549504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:30.264 passed 00:06:30.264 Test: mem map registration ...[2024-10-01 15:06:28.618328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:30.264 [2024-10-01 15:06:28.618394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:30.264 passed 00:06:30.264 Test: mem map adjacent registrations ...passed 00:06:30.264 00:06:30.264 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.264 suites 1 1 n/a 0 0 00:06:30.264 tests 4 4 4 0 0 00:06:30.264 asserts 152 152 152 0 n/a 00:06:30.264 00:06:30.264 Elapsed time = 0.246 seconds 00:06:30.264 00:06:30.264 real 0m0.298s 00:06:30.264 user 0m0.261s 00:06:30.264 sys 0m0.026s 00:06:30.264 15:06:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.264 15:06:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:30.264 ************************************ 00:06:30.264 END TEST env_memory 00:06:30.264 ************************************ 00:06:30.264 15:06:28 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:30.264 15:06:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.264 15:06:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.264 15:06:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.264 ************************************ 00:06:30.264 START TEST env_vtophys 00:06:30.264 ************************************ 00:06:30.264 15:06:28 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:30.528 EAL: lib.eal log level changed from notice to debug 00:06:30.528 EAL: Detected lcore 0 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 1 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 2 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 3 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 4 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 5 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 6 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 7 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 8 as core 0 on socket 0 00:06:30.528 EAL: Detected lcore 9 as core 0 on socket 0 00:06:30.528 EAL: Maximum logical cores by configuration: 128 00:06:30.528 EAL: Detected CPU lcores: 10 00:06:30.528 EAL: Detected NUMA nodes: 1 00:06:30.528 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:30.528 EAL: Detected shared linkage of DPDK 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:30.528 EAL: Registered [vdev] bus. 00:06:30.528 EAL: bus.vdev log level changed from disabled to notice 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:30.528 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:30.528 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:30.528 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:30.528 EAL: No shared files mode enabled, IPC will be disabled 00:06:30.528 EAL: No shared files mode enabled, IPC is disabled 00:06:30.528 EAL: Selected IOVA mode 'PA' 00:06:30.528 EAL: Probing VFIO support... 00:06:30.528 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.528 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:30.528 EAL: Ask a virtual area of 0x2e000 bytes 00:06:30.528 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:30.528 EAL: Setting up physically contiguous memory... 
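Each of the four memseg lists reserved below covers n_segs:8192 segments of 2 MiB hugepages, and the 0x400000000-byte virtual areas the EAL asks for work out to 16 GiB apiece, 64 GiB of reserved address space in total. Quick shell arithmetic to confirm (plain bash, nothing SPDK-specific):

    echo $(( 0x400000000 / 1024 / 1024 / 1024 ))   # -> 16 GiB of VA per memseg list
    echo $(( 8192 * 2 ))                           # -> 16384 MiB backed per list (n_segs * hugepage_sz)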
00:06:30.528 EAL: Setting maximum number of open files to 524288 00:06:30.528 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:30.528 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:30.528 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.528 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:30.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.528 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.528 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:30.528 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:30.528 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.528 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:30.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.528 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.528 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:30.528 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:30.528 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.528 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:30.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.528 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.528 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:30.528 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:30.528 EAL: Ask a virtual area of 0x61000 bytes 00:06:30.528 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:30.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:30.528 EAL: Ask a virtual area of 0x400000000 bytes 00:06:30.528 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:30.528 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:30.528 EAL: Hugepages will be freed exactly as allocated. 00:06:30.528 EAL: No shared files mode enabled, IPC is disabled 00:06:30.528 EAL: No shared files mode enabled, IPC is disabled 00:06:30.528 EAL: TSC frequency is ~2490000 KHz 00:06:30.528 EAL: Main lcore 0 is ready (tid=7fa70de39a40;cpuset=[0]) 00:06:30.528 EAL: Trying to obtain current memory policy. 00:06:30.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.528 EAL: Restoring previous memory policy: 0 00:06:30.528 EAL: request: mp_malloc_sync 00:06:30.528 EAL: No shared files mode enabled, IPC is disabled 00:06:30.528 EAL: Heap on socket 0 was expanded by 2MB 00:06:30.528 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.528 EAL: No shared files mode enabled, IPC is disabled 00:06:30.528 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:30.528 EAL: Mem event callback 'spdk:(nil)' registered 00:06:30.528 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:30.528 00:06:30.528 00:06:30.528 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.528 http://cunit.sourceforge.net/ 00:06:30.528 00:06:30.528 00:06:30.528 Suite: components_suite 00:06:31.098 Test: vtophys_malloc_test ...passed 00:06:31.098 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
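The vtophys_spdk_malloc_test iterations that follow expand the heap through a doubling-plus-overhead sequence: 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB, with a matching shrink once each buffer is freed. The series can be reproduced with a one-liner (an observation about the log, not an SPDK API):

    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 )) MB"; done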
00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 4MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 4MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 6MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 6MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 10MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 10MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 18MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 18MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 34MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 34MB 00:06:31.098 EAL: Trying to obtain current memory policy. 
00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 66MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 66MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was shrunk by 130MB 00:06:31.098 EAL: Trying to obtain current memory policy. 00:06:31.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.098 EAL: Restoring previous memory policy: 4 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.098 EAL: request: mp_malloc_sync 00:06:31.098 EAL: No shared files mode enabled, IPC is disabled 00:06:31.098 EAL: Heap on socket 0 was expanded by 258MB 00:06:31.098 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.391 EAL: request: mp_malloc_sync 00:06:31.391 EAL: No shared files mode enabled, IPC is disabled 00:06:31.391 EAL: Heap on socket 0 was shrunk by 258MB 00:06:31.391 EAL: Trying to obtain current memory policy. 00:06:31.391 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.391 EAL: Restoring previous memory policy: 4 00:06:31.391 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.391 EAL: request: mp_malloc_sync 00:06:31.391 EAL: No shared files mode enabled, IPC is disabled 00:06:31.391 EAL: Heap on socket 0 was expanded by 514MB 00:06:31.391 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.659 EAL: request: mp_malloc_sync 00:06:31.659 EAL: No shared files mode enabled, IPC is disabled 00:06:31.659 EAL: Heap on socket 0 was shrunk by 514MB 00:06:31.659 EAL: Trying to obtain current memory policy. 
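Each 'Setting policy MPOL_PREFERRED for socket 0' / 'Restoring previous memory policy: 4' pair shows the EAL pinning the pending allocation to the socket it is growing and then putting the caller's NUMA policy back (mode 4 is MPOL_LOCAL in the kernel's mempolicy numbering; the very first restore reported 0, MPOL_DEFAULT). Outside the EAL the same preference can be expressed with numactl, shown here purely as an illustration:

    # Prefer NUMA node 0 for a process's allocations (the EAL does the
    # equivalent internally via set_mempolicy(2) around each expansion).
    numactl --preferred=0 ./test/env/vtophys/vtophys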
00:06:31.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.659 EAL: Restoring previous memory policy: 4 00:06:31.659 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.659 EAL: request: mp_malloc_sync 00:06:31.659 EAL: No shared files mode enabled, IPC is disabled 00:06:31.659 EAL: Heap on socket 0 was expanded by 1026MB 00:06:31.918 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.178 passed 00:06:32.178 00:06:32.178 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.178 suites 1 1 n/a 0 0 00:06:32.178 tests 2 2 2 0 0 00:06:32.178 asserts 5316 5316 5316 0 n/a 00:06:32.178 00:06:32.178 Elapsed time = 1.454 secondsEAL: request: mp_malloc_sync 00:06:32.178 EAL: No shared files mode enabled, IPC is disabled 00:06:32.178 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:32.178 00:06:32.178 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.178 EAL: request: mp_malloc_sync 00:06:32.178 EAL: No shared files mode enabled, IPC is disabled 00:06:32.178 EAL: Heap on socket 0 was shrunk by 2MB 00:06:32.178 EAL: No shared files mode enabled, IPC is disabled 00:06:32.178 EAL: No shared files mode enabled, IPC is disabled 00:06:32.178 EAL: No shared files mode enabled, IPC is disabled 00:06:32.178 00:06:32.178 real 0m1.720s 00:06:32.178 user 0m0.838s 00:06:32.178 sys 0m0.752s 00:06:32.178 15:06:30 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.178 15:06:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:32.178 ************************************ 00:06:32.178 END TEST env_vtophys 00:06:32.178 ************************************ 00:06:32.178 15:06:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.178 15:06:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.178 15:06:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.178 15:06:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.178 ************************************ 00:06:32.178 START TEST env_pci 00:06:32.178 ************************************ 00:06:32.178 15:06:30 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.178 00:06:32.178 00:06:32.178 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.178 http://cunit.sourceforge.net/ 00:06:32.178 00:06:32.178 00:06:32.178 Suite: pci 00:06:32.178 Test: pci_hook ...[2024-10-01 15:06:30.623997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70524 has claimed it 00:06:32.178 passed 00:06:32.178 00:06:32.178 EAL: Cannot find device (10000:00:01.0) 00:06:32.178 EAL: Failed to attach device on primary process 00:06:32.178 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.178 suites 1 1 n/a 0 0 00:06:32.178 tests 1 1 1 0 0 00:06:32.178 asserts 25 25 25 0 n/a 00:06:32.178 00:06:32.178 Elapsed time = 0.008 seconds 00:06:32.178 00:06:32.178 real 0m0.102s 00:06:32.178 user 0m0.038s 00:06:32.178 sys 0m0.063s 00:06:32.178 15:06:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.178 15:06:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:32.178 ************************************ 00:06:32.178 END TEST env_pci 00:06:32.178 ************************************ 00:06:32.437 15:06:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.437 15:06:30 env -- env/env.sh@15 -- # uname 00:06:32.437 15:06:30 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.437 15:06:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.437 15:06:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.437 15:06:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:32.437 15:06:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.437 15:06:30 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.437 ************************************ 00:06:32.437 START TEST env_dpdk_post_init 00:06:32.437 ************************************ 00:06:32.437 15:06:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.437 EAL: Detected CPU lcores: 10 00:06:32.437 EAL: Detected NUMA nodes: 1 00:06:32.437 EAL: Detected shared linkage of DPDK 00:06:32.437 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.437 EAL: Selected IOVA mode 'PA' 00:06:32.437 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:32.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:32.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:32.697 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:32.697 Starting DPDK initialization... 00:06:32.697 Starting SPDK post initialization... 00:06:32.697 SPDK NVMe probe 00:06:32.697 Attaching to 0000:00:10.0 00:06:32.697 Attaching to 0000:00:11.0 00:06:32.697 Attaching to 0000:00:12.0 00:06:32.697 Attaching to 0000:00:13.0 00:06:32.697 Attached to 0000:00:10.0 00:06:32.697 Attached to 0000:00:11.0 00:06:32.697 Attached to 0000:00:13.0 00:06:32.697 Attached to 0000:00:12.0 00:06:32.697 Cleaning up... 
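Note the attach completions above land in the order 10.0, 11.0, 13.0, 12.0 even though the probes were issued in BDF order; probe completion is asynchronous, so the ordering carries no significance. When auditing a saved copy of such a log, pairing the two message kinds is enough to spot a controller that never finished (hypothetical file name; GNU grep syntax):

    grep -o 'Attach\(ing\|ed\) to [0-9a-f:.]*' build.log | sort | uniq -c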
00:06:32.697 00:06:32.697 real 0m0.279s 00:06:32.697 user 0m0.085s 00:06:32.697 sys 0m0.097s 00:06:32.697 15:06:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.697 15:06:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.697 ************************************ 00:06:32.697 END TEST env_dpdk_post_init 00:06:32.697 ************************************ 00:06:32.697 15:06:31 env -- env/env.sh@26 -- # uname 00:06:32.697 15:06:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:32.697 15:06:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.697 15:06:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.697 15:06:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.697 15:06:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.697 ************************************ 00:06:32.697 START TEST env_mem_callbacks 00:06:32.697 ************************************ 00:06:32.697 15:06:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.697 EAL: Detected CPU lcores: 10 00:06:32.697 EAL: Detected NUMA nodes: 1 00:06:32.697 EAL: Detected shared linkage of DPDK 00:06:32.697 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.697 EAL: Selected IOVA mode 'PA' 00:06:32.955 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.955 00:06:32.955 00:06:32.955 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.955 http://cunit.sourceforge.net/ 00:06:32.955 00:06:32.955 00:06:32.955 Suite: memory 00:06:32.955 Test: test ... 00:06:32.956 register 0x200000200000 2097152 00:06:32.956 malloc 3145728 00:06:32.956 register 0x200000400000 4194304 00:06:32.956 buf 0x200000500000 len 3145728 PASSED 00:06:32.956 malloc 64 00:06:32.956 buf 0x2000004fff40 len 64 PASSED 00:06:32.956 malloc 4194304 00:06:32.956 register 0x200000800000 6291456 00:06:32.956 buf 0x200000a00000 len 4194304 PASSED 00:06:32.956 free 0x200000500000 3145728 00:06:32.956 free 0x2000004fff40 64 00:06:32.956 unregister 0x200000400000 4194304 PASSED 00:06:32.956 free 0x200000a00000 4194304 00:06:32.956 unregister 0x200000800000 6291456 PASSED 00:06:32.956 malloc 8388608 00:06:32.956 register 0x200000400000 10485760 00:06:32.956 buf 0x200000600000 len 8388608 PASSED 00:06:32.956 free 0x200000600000 8388608 00:06:32.956 unregister 0x200000400000 10485760 PASSED 00:06:32.956 passed 00:06:32.956 00:06:32.956 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.956 suites 1 1 n/a 0 0 00:06:32.956 tests 1 1 1 0 0 00:06:32.956 asserts 15 15 15 0 n/a 00:06:32.956 00:06:32.956 Elapsed time = 0.011 seconds 00:06:32.956 00:06:32.956 real 0m0.220s 00:06:32.956 user 0m0.038s 00:06:32.956 sys 0m0.081s 00:06:32.956 15:06:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.956 ************************************ 00:06:32.956 15:06:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 END TEST env_mem_callbacks 00:06:32.956 ************************************ 00:06:32.956 00:06:32.956 real 0m3.205s 00:06:32.956 user 0m1.479s 00:06:32.956 sys 0m1.383s 00:06:32.956 15:06:31 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.956 15:06:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 ************************************ 00:06:32.956 END TEST env 00:06:32.956 
************************************ 00:06:32.956 15:06:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:32.956 15:06:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.956 15:06:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.956 15:06:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 ************************************ 00:06:32.956 START TEST rpc 00:06:32.956 ************************************ 00:06:32.956 15:06:31 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:33.214 * Looking for test storage... 00:06:33.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:33.214 15:06:31 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.214 15:06:31 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.214 15:06:31 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.214 15:06:31 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.214 15:06:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.214 15:06:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.214 15:06:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.214 15:06:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.214 15:06:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.214 15:06:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:33.214 15:06:31 rpc -- scripts/common.sh@345 -- # : 1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.214 15:06:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.214 15:06:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@353 -- # local d=1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.214 15:06:31 rpc -- scripts/common.sh@355 -- # echo 1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.214 15:06:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@353 -- # local d=2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.214 15:06:31 rpc -- scripts/common.sh@355 -- # echo 2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.214 15:06:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.215 15:06:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.215 15:06:31 rpc -- scripts/common.sh@368 -- # return 0 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.215 --rc genhtml_branch_coverage=1 00:06:33.215 --rc genhtml_function_coverage=1 00:06:33.215 --rc genhtml_legend=1 00:06:33.215 --rc geninfo_all_blocks=1 00:06:33.215 --rc geninfo_unexecuted_blocks=1 00:06:33.215 00:06:33.215 ' 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.215 --rc genhtml_branch_coverage=1 00:06:33.215 --rc genhtml_function_coverage=1 00:06:33.215 --rc genhtml_legend=1 00:06:33.215 --rc geninfo_all_blocks=1 00:06:33.215 --rc geninfo_unexecuted_blocks=1 00:06:33.215 00:06:33.215 ' 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.215 --rc genhtml_branch_coverage=1 00:06:33.215 --rc genhtml_function_coverage=1 00:06:33.215 --rc genhtml_legend=1 00:06:33.215 --rc geninfo_all_blocks=1 00:06:33.215 --rc geninfo_unexecuted_blocks=1 00:06:33.215 00:06:33.215 ' 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.215 --rc genhtml_branch_coverage=1 00:06:33.215 --rc genhtml_function_coverage=1 00:06:33.215 --rc genhtml_legend=1 00:06:33.215 --rc geninfo_all_blocks=1 00:06:33.215 --rc geninfo_unexecuted_blocks=1 00:06:33.215 00:06:33.215 ' 00:06:33.215 15:06:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70651 00:06:33.215 15:06:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.215 15:06:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:33.215 15:06:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70651 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@831 -- # '[' -z 70651 ']' 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
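The cmp_versions walk above splits each version string on '.', '-' and ':' (the IFS=.-: assignments) and compares field by field, so 'lt 1.15 2' is decided on the first component (1 < 2) and the legacy lcov option spellings get exported. A compact equivalent built on sort -V (GNU coreutils), given as an illustration rather than the repo helper:

    lt() { [ "$1" = "$2" ] && return 1
           [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "lcov < 2: use the --rc lcov_*_coverage=1 spellings"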
00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.215 15:06:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.473 [2024-10-01 15:06:31.819473] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:33.473 [2024-10-01 15:06:31.819607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70651 ] 00:06:33.473 [2024-10-01 15:06:31.978438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.732 [2024-10-01 15:06:32.022209] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:33.732 [2024-10-01 15:06:32.022270] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70651' to capture a snapshot of events at runtime. 00:06:33.732 [2024-10-01 15:06:32.022286] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.732 [2024-10-01 15:06:32.022304] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.732 [2024-10-01 15:06:32.022321] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70651 for offline analysis/debug. 00:06:33.732 [2024-10-01 15:06:32.022374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.300 15:06:32 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.300 15:06:32 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.300 15:06:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.300 15:06:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.300 15:06:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:34.300 15:06:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:34.300 15:06:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.300 15:06:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.300 15:06:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 ************************************ 00:06:34.300 START TEST rpc_integrity 00:06:34.300 ************************************ 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.300 15:06:32 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.300 { 00:06:34.300 "name": "Malloc0", 00:06:34.300 "aliases": [ 00:06:34.300 "26173091-0111-4b0e-b750-f53d980e1963" 00:06:34.300 ], 00:06:34.300 "product_name": "Malloc disk", 00:06:34.300 "block_size": 512, 00:06:34.300 "num_blocks": 16384, 00:06:34.300 "uuid": "26173091-0111-4b0e-b750-f53d980e1963", 00:06:34.300 "assigned_rate_limits": { 00:06:34.300 "rw_ios_per_sec": 0, 00:06:34.300 "rw_mbytes_per_sec": 0, 00:06:34.300 "r_mbytes_per_sec": 0, 00:06:34.300 "w_mbytes_per_sec": 0 00:06:34.300 }, 00:06:34.300 "claimed": false, 00:06:34.300 "zoned": false, 00:06:34.300 "supported_io_types": { 00:06:34.300 "read": true, 00:06:34.300 "write": true, 00:06:34.300 "unmap": true, 00:06:34.300 "flush": true, 00:06:34.300 "reset": true, 00:06:34.300 "nvme_admin": false, 00:06:34.300 "nvme_io": false, 00:06:34.300 "nvme_io_md": false, 00:06:34.300 "write_zeroes": true, 00:06:34.300 "zcopy": true, 00:06:34.300 "get_zone_info": false, 00:06:34.300 "zone_management": false, 00:06:34.300 "zone_append": false, 00:06:34.300 "compare": false, 00:06:34.300 "compare_and_write": false, 00:06:34.300 "abort": true, 00:06:34.300 "seek_hole": false, 00:06:34.300 "seek_data": false, 00:06:34.300 "copy": true, 00:06:34.300 "nvme_iov_md": false 00:06:34.300 }, 00:06:34.300 "memory_domains": [ 00:06:34.300 { 00:06:34.300 "dma_device_id": "system", 00:06:34.300 "dma_device_type": 1 00:06:34.300 }, 00:06:34.300 { 00:06:34.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.300 "dma_device_type": 2 00:06:34.300 } 00:06:34.300 ], 00:06:34.300 "driver_specific": {} 00:06:34.300 } 00:06:34.300 ]' 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 [2024-10-01 15:06:32.801462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:34.300 [2024-10-01 15:06:32.801529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.300 [2024-10-01 15:06:32.801559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:34.300 [2024-10-01 15:06:32.801572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.300 [2024-10-01 15:06:32.804125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.300 [2024-10-01 15:06:32.804180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.300 Passthru0 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.300 
15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.300 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.300 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.300 { 00:06:34.300 "name": "Malloc0", 00:06:34.300 "aliases": [ 00:06:34.300 "26173091-0111-4b0e-b750-f53d980e1963" 00:06:34.300 ], 00:06:34.300 "product_name": "Malloc disk", 00:06:34.300 "block_size": 512, 00:06:34.300 "num_blocks": 16384, 00:06:34.300 "uuid": "26173091-0111-4b0e-b750-f53d980e1963", 00:06:34.300 "assigned_rate_limits": { 00:06:34.300 "rw_ios_per_sec": 0, 00:06:34.300 "rw_mbytes_per_sec": 0, 00:06:34.300 "r_mbytes_per_sec": 0, 00:06:34.300 "w_mbytes_per_sec": 0 00:06:34.300 }, 00:06:34.300 "claimed": true, 00:06:34.300 "claim_type": "exclusive_write", 00:06:34.300 "zoned": false, 00:06:34.300 "supported_io_types": { 00:06:34.300 "read": true, 00:06:34.300 "write": true, 00:06:34.300 "unmap": true, 00:06:34.300 "flush": true, 00:06:34.300 "reset": true, 00:06:34.300 "nvme_admin": false, 00:06:34.300 "nvme_io": false, 00:06:34.300 "nvme_io_md": false, 00:06:34.300 "write_zeroes": true, 00:06:34.300 "zcopy": true, 00:06:34.300 "get_zone_info": false, 00:06:34.300 "zone_management": false, 00:06:34.300 "zone_append": false, 00:06:34.300 "compare": false, 00:06:34.300 "compare_and_write": false, 00:06:34.300 "abort": true, 00:06:34.300 "seek_hole": false, 00:06:34.300 "seek_data": false, 00:06:34.300 "copy": true, 00:06:34.300 "nvme_iov_md": false 00:06:34.300 }, 00:06:34.300 "memory_domains": [ 00:06:34.300 { 00:06:34.300 "dma_device_id": "system", 00:06:34.300 "dma_device_type": 1 00:06:34.300 }, 00:06:34.300 { 00:06:34.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.300 "dma_device_type": 2 00:06:34.300 } 00:06:34.300 ], 00:06:34.300 "driver_specific": {} 00:06:34.300 }, 00:06:34.300 { 00:06:34.300 "name": "Passthru0", 00:06:34.300 "aliases": [ 00:06:34.300 "a8d1aebc-e0c0-5eb4-ab4e-fe656e70326d" 00:06:34.300 ], 00:06:34.300 "product_name": "passthru", 00:06:34.300 "block_size": 512, 00:06:34.300 "num_blocks": 16384, 00:06:34.300 "uuid": "a8d1aebc-e0c0-5eb4-ab4e-fe656e70326d", 00:06:34.300 "assigned_rate_limits": { 00:06:34.300 "rw_ios_per_sec": 0, 00:06:34.300 "rw_mbytes_per_sec": 0, 00:06:34.300 "r_mbytes_per_sec": 0, 00:06:34.300 "w_mbytes_per_sec": 0 00:06:34.300 }, 00:06:34.300 "claimed": false, 00:06:34.300 "zoned": false, 00:06:34.300 "supported_io_types": { 00:06:34.300 "read": true, 00:06:34.300 "write": true, 00:06:34.300 "unmap": true, 00:06:34.300 "flush": true, 00:06:34.300 "reset": true, 00:06:34.300 "nvme_admin": false, 00:06:34.301 "nvme_io": false, 00:06:34.301 "nvme_io_md": false, 00:06:34.301 "write_zeroes": true, 00:06:34.301 "zcopy": true, 00:06:34.301 "get_zone_info": false, 00:06:34.301 "zone_management": false, 00:06:34.301 "zone_append": false, 00:06:34.301 "compare": false, 00:06:34.301 "compare_and_write": false, 00:06:34.301 "abort": true, 00:06:34.301 "seek_hole": false, 00:06:34.301 "seek_data": false, 00:06:34.301 "copy": true, 00:06:34.301 "nvme_iov_md": false 00:06:34.301 }, 00:06:34.301 "memory_domains": [ 00:06:34.301 { 00:06:34.301 "dma_device_id": "system", 00:06:34.301 "dma_device_type": 1 00:06:34.301 }, 00:06:34.301 { 00:06:34.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.301 "dma_device_type": 2 
00:06:34.301 } 00:06:34.301 ], 00:06:34.301 "driver_specific": { 00:06:34.301 "passthru": { 00:06:34.301 "name": "Passthru0", 00:06:34.301 "base_bdev_name": "Malloc0" 00:06:34.301 } 00:06:34.301 } 00:06:34.301 } 00:06:34.301 ]' 00:06:34.301 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.559 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.559 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.559 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.559 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.560 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.560 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.560 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.560 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:34.560 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:34.560 15:06:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:34.560 00:06:34.560 real 0m0.305s 00:06:34.560 user 0m0.166s 00:06:34.560 sys 0m0.058s 00:06:34.560 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.560 15:06:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 ************************************ 00:06:34.560 END TEST rpc_integrity 00:06:34.560 ************************************ 00:06:34.560 15:06:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:34.560 15:06:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.560 15:06:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.560 15:06:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 ************************************ 00:06:34.560 START TEST rpc_plugins 00:06:34.560 ************************************ 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:34.560 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.560 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:34.560 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.560 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:34.560 { 00:06:34.560 "name": "Malloc1", 00:06:34.560 "aliases": 
[ 00:06:34.560 "33d4b582-b320-43df-abda-831cedef96ed" 00:06:34.560 ], 00:06:34.560 "product_name": "Malloc disk", 00:06:34.560 "block_size": 4096, 00:06:34.560 "num_blocks": 256, 00:06:34.560 "uuid": "33d4b582-b320-43df-abda-831cedef96ed", 00:06:34.560 "assigned_rate_limits": { 00:06:34.560 "rw_ios_per_sec": 0, 00:06:34.560 "rw_mbytes_per_sec": 0, 00:06:34.560 "r_mbytes_per_sec": 0, 00:06:34.560 "w_mbytes_per_sec": 0 00:06:34.560 }, 00:06:34.560 "claimed": false, 00:06:34.560 "zoned": false, 00:06:34.560 "supported_io_types": { 00:06:34.560 "read": true, 00:06:34.560 "write": true, 00:06:34.560 "unmap": true, 00:06:34.560 "flush": true, 00:06:34.560 "reset": true, 00:06:34.560 "nvme_admin": false, 00:06:34.560 "nvme_io": false, 00:06:34.560 "nvme_io_md": false, 00:06:34.560 "write_zeroes": true, 00:06:34.560 "zcopy": true, 00:06:34.560 "get_zone_info": false, 00:06:34.560 "zone_management": false, 00:06:34.560 "zone_append": false, 00:06:34.560 "compare": false, 00:06:34.560 "compare_and_write": false, 00:06:34.560 "abort": true, 00:06:34.560 "seek_hole": false, 00:06:34.560 "seek_data": false, 00:06:34.560 "copy": true, 00:06:34.560 "nvme_iov_md": false 00:06:34.560 }, 00:06:34.560 "memory_domains": [ 00:06:34.560 { 00:06:34.560 "dma_device_id": "system", 00:06:34.560 "dma_device_type": 1 00:06:34.560 }, 00:06:34.560 { 00:06:34.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.560 "dma_device_type": 2 00:06:34.560 } 00:06:34.560 ], 00:06:34.560 "driver_specific": {} 00:06:34.560 } 00:06:34.560 ]' 00:06:34.560 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:34.819 15:06:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:34.819 00:06:34.819 real 0m0.173s 00:06:34.819 user 0m0.102s 00:06:34.820 sys 0m0.027s 00:06:34.820 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.820 15:06:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.820 ************************************ 00:06:34.820 END TEST rpc_plugins 00:06:34.820 ************************************ 00:06:34.820 15:06:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:34.820 15:06:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.820 15:06:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.820 15:06:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.820 ************************************ 00:06:34.820 START TEST rpc_trace_cmd_test 00:06:34.820 ************************************ 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:34.820 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70651", 00:06:34.820 "tpoint_group_mask": "0x8", 00:06:34.820 "iscsi_conn": { 00:06:34.820 "mask": "0x2", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "scsi": { 00:06:34.820 "mask": "0x4", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "bdev": { 00:06:34.820 "mask": "0x8", 00:06:34.820 "tpoint_mask": "0xffffffffffffffff" 00:06:34.820 }, 00:06:34.820 "nvmf_rdma": { 00:06:34.820 "mask": "0x10", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "nvmf_tcp": { 00:06:34.820 "mask": "0x20", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "ftl": { 00:06:34.820 "mask": "0x40", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "blobfs": { 00:06:34.820 "mask": "0x80", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "dsa": { 00:06:34.820 "mask": "0x200", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "thread": { 00:06:34.820 "mask": "0x400", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "nvme_pcie": { 00:06:34.820 "mask": "0x800", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "iaa": { 00:06:34.820 "mask": "0x1000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "nvme_tcp": { 00:06:34.820 "mask": "0x2000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "bdev_nvme": { 00:06:34.820 "mask": "0x4000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "sock": { 00:06:34.820 "mask": "0x8000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "blob": { 00:06:34.820 "mask": "0x10000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 }, 00:06:34.820 "bdev_raid": { 00:06:34.820 "mask": "0x20000", 00:06:34.820 "tpoint_mask": "0x0" 00:06:34.820 } 00:06:34.820 }' 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:34.820 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:35.079 00:06:35.079 real 0m0.256s 00:06:35.079 user 0m0.192s 00:06:35.079 sys 0m0.049s 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.079 ************************************ 00:06:35.079 END TEST rpc_trace_cmd_test 
00:06:35.079 ************************************ 00:06:35.079 15:06:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 15:06:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:35.079 15:06:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:35.079 15:06:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:35.079 15:06:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.079 15:06:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.079 15:06:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 ************************************ 00:06:35.079 START TEST rpc_daemon_integrity 00:06:35.079 ************************************ 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:35.079 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:35.338 { 00:06:35.338 "name": "Malloc2", 00:06:35.338 "aliases": [ 00:06:35.338 "142137b6-bd6b-4f7a-8cfb-6fd2582486d9" 00:06:35.338 ], 00:06:35.338 "product_name": "Malloc disk", 00:06:35.338 "block_size": 512, 00:06:35.338 "num_blocks": 16384, 00:06:35.338 "uuid": "142137b6-bd6b-4f7a-8cfb-6fd2582486d9", 00:06:35.338 "assigned_rate_limits": { 00:06:35.338 "rw_ios_per_sec": 0, 00:06:35.338 "rw_mbytes_per_sec": 0, 00:06:35.338 "r_mbytes_per_sec": 0, 00:06:35.338 "w_mbytes_per_sec": 0 00:06:35.338 }, 00:06:35.338 "claimed": false, 00:06:35.338 "zoned": false, 00:06:35.338 "supported_io_types": { 00:06:35.338 "read": true, 00:06:35.338 "write": true, 00:06:35.338 "unmap": true, 00:06:35.338 "flush": true, 00:06:35.338 "reset": true, 00:06:35.338 "nvme_admin": false, 00:06:35.338 "nvme_io": false, 00:06:35.338 "nvme_io_md": false, 00:06:35.338 "write_zeroes": true, 00:06:35.338 "zcopy": true, 00:06:35.338 "get_zone_info": false, 00:06:35.338 "zone_management": false, 00:06:35.338 "zone_append": false, 00:06:35.338 "compare": false, 00:06:35.338 "compare_and_write": false, 00:06:35.338 "abort": true, 00:06:35.338 "seek_hole": false, 00:06:35.338 
"seek_data": false, 00:06:35.338 "copy": true, 00:06:35.338 "nvme_iov_md": false 00:06:35.338 }, 00:06:35.338 "memory_domains": [ 00:06:35.338 { 00:06:35.338 "dma_device_id": "system", 00:06:35.338 "dma_device_type": 1 00:06:35.338 }, 00:06:35.338 { 00:06:35.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.338 "dma_device_type": 2 00:06:35.338 } 00:06:35.338 ], 00:06:35.338 "driver_specific": {} 00:06:35.338 } 00:06:35.338 ]' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 [2024-10-01 15:06:33.733043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:35.338 [2024-10-01 15:06:33.733109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.338 [2024-10-01 15:06:33.733134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:35.338 [2024-10-01 15:06:33.733146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.338 [2024-10-01 15:06:33.735696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.338 [2024-10-01 15:06:33.735736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:35.338 Passthru0 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:35.338 { 00:06:35.338 "name": "Malloc2", 00:06:35.338 "aliases": [ 00:06:35.338 "142137b6-bd6b-4f7a-8cfb-6fd2582486d9" 00:06:35.338 ], 00:06:35.338 "product_name": "Malloc disk", 00:06:35.338 "block_size": 512, 00:06:35.338 "num_blocks": 16384, 00:06:35.338 "uuid": "142137b6-bd6b-4f7a-8cfb-6fd2582486d9", 00:06:35.338 "assigned_rate_limits": { 00:06:35.338 "rw_ios_per_sec": 0, 00:06:35.338 "rw_mbytes_per_sec": 0, 00:06:35.338 "r_mbytes_per_sec": 0, 00:06:35.338 "w_mbytes_per_sec": 0 00:06:35.338 }, 00:06:35.338 "claimed": true, 00:06:35.338 "claim_type": "exclusive_write", 00:06:35.338 "zoned": false, 00:06:35.338 "supported_io_types": { 00:06:35.338 "read": true, 00:06:35.338 "write": true, 00:06:35.338 "unmap": true, 00:06:35.338 "flush": true, 00:06:35.338 "reset": true, 00:06:35.338 "nvme_admin": false, 00:06:35.338 "nvme_io": false, 00:06:35.338 "nvme_io_md": false, 00:06:35.338 "write_zeroes": true, 00:06:35.338 "zcopy": true, 00:06:35.338 "get_zone_info": false, 00:06:35.338 "zone_management": false, 00:06:35.338 "zone_append": false, 00:06:35.338 "compare": false, 00:06:35.338 "compare_and_write": false, 00:06:35.338 "abort": true, 00:06:35.338 "seek_hole": false, 00:06:35.338 "seek_data": false, 00:06:35.338 "copy": true, 00:06:35.338 "nvme_iov_md": false 00:06:35.338 }, 00:06:35.338 
"memory_domains": [ 00:06:35.338 { 00:06:35.338 "dma_device_id": "system", 00:06:35.338 "dma_device_type": 1 00:06:35.338 }, 00:06:35.338 { 00:06:35.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.338 "dma_device_type": 2 00:06:35.338 } 00:06:35.338 ], 00:06:35.338 "driver_specific": {} 00:06:35.338 }, 00:06:35.338 { 00:06:35.338 "name": "Passthru0", 00:06:35.338 "aliases": [ 00:06:35.338 "1e4ea8a4-002d-56cb-a8a5-8cc97460afb5" 00:06:35.338 ], 00:06:35.338 "product_name": "passthru", 00:06:35.338 "block_size": 512, 00:06:35.338 "num_blocks": 16384, 00:06:35.338 "uuid": "1e4ea8a4-002d-56cb-a8a5-8cc97460afb5", 00:06:35.338 "assigned_rate_limits": { 00:06:35.338 "rw_ios_per_sec": 0, 00:06:35.338 "rw_mbytes_per_sec": 0, 00:06:35.338 "r_mbytes_per_sec": 0, 00:06:35.338 "w_mbytes_per_sec": 0 00:06:35.338 }, 00:06:35.338 "claimed": false, 00:06:35.338 "zoned": false, 00:06:35.338 "supported_io_types": { 00:06:35.338 "read": true, 00:06:35.338 "write": true, 00:06:35.338 "unmap": true, 00:06:35.338 "flush": true, 00:06:35.338 "reset": true, 00:06:35.338 "nvme_admin": false, 00:06:35.338 "nvme_io": false, 00:06:35.338 "nvme_io_md": false, 00:06:35.338 "write_zeroes": true, 00:06:35.338 "zcopy": true, 00:06:35.338 "get_zone_info": false, 00:06:35.338 "zone_management": false, 00:06:35.338 "zone_append": false, 00:06:35.338 "compare": false, 00:06:35.338 "compare_and_write": false, 00:06:35.338 "abort": true, 00:06:35.338 "seek_hole": false, 00:06:35.338 "seek_data": false, 00:06:35.338 "copy": true, 00:06:35.338 "nvme_iov_md": false 00:06:35.338 }, 00:06:35.338 "memory_domains": [ 00:06:35.338 { 00:06:35.338 "dma_device_id": "system", 00:06:35.338 "dma_device_type": 1 00:06:35.338 }, 00:06:35.338 { 00:06:35.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.338 "dma_device_type": 2 00:06:35.338 } 00:06:35.338 ], 00:06:35.338 "driver_specific": { 00:06:35.338 "passthru": { 00:06:35.338 "name": "Passthru0", 00:06:35.338 "base_bdev_name": "Malloc2" 00:06:35.338 } 00:06:35.338 } 00:06:35.338 } 00:06:35.338 ]' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:35.338 15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:35.599 
15:06:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:35.599 00:06:35.599 real 0m0.319s 00:06:35.599 user 0m0.185s 00:06:35.599 sys 0m0.068s 00:06:35.599 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.599 15:06:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.599 ************************************ 00:06:35.599 END TEST rpc_daemon_integrity 00:06:35.599 ************************************ 00:06:35.599 15:06:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:35.599 15:06:33 rpc -- rpc/rpc.sh@84 -- # killprocess 70651 00:06:35.599 15:06:33 rpc -- common/autotest_common.sh@950 -- # '[' -z 70651 ']' 00:06:35.599 15:06:33 rpc -- common/autotest_common.sh@954 -- # kill -0 70651 00:06:35.599 15:06:33 rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.599 15:06:33 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.599 15:06:33 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70651 00:06:35.599 15:06:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.599 killing process with pid 70651 00:06:35.599 15:06:34 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.599 15:06:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70651' 00:06:35.599 15:06:34 rpc -- common/autotest_common.sh@969 -- # kill 70651 00:06:35.599 15:06:34 rpc -- common/autotest_common.sh@974 -- # wait 70651 00:06:36.167 00:06:36.167 real 0m2.946s 00:06:36.167 user 0m3.454s 00:06:36.167 sys 0m0.939s 00:06:36.167 15:06:34 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.167 15:06:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.167 ************************************ 00:06:36.167 END TEST rpc 00:06:36.167 ************************************ 00:06:36.167 15:06:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:36.167 15:06:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.167 15:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.167 15:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.167 ************************************ 00:06:36.167 START TEST skip_rpc 00:06:36.167 ************************************ 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:36.167 * Looking for test storage... 
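The killprocess idiom traced throughout this run reduces to a liveness probe followed by a guarded kill and reap. A minimal sketch of that idiom (simplified; the helper name matches the trace, but the exact error handling here is an assumption, not the harness's code):

    # Simplified sketch of the killprocess idiom seen in the trace above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1               # probe: is the process alive?
        # The target shows up as "reactor_0"; never kill a stray sudo by pid.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and propagate exit status
    }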
00:06:36.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.167 15:06:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.167 --rc genhtml_branch_coverage=1 00:06:36.167 --rc genhtml_function_coverage=1 00:06:36.167 --rc genhtml_legend=1 00:06:36.167 --rc geninfo_all_blocks=1 00:06:36.167 --rc geninfo_unexecuted_blocks=1 00:06:36.167 00:06:36.167 ' 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.167 --rc genhtml_branch_coverage=1 00:06:36.167 --rc genhtml_function_coverage=1 00:06:36.167 --rc genhtml_legend=1 00:06:36.167 --rc geninfo_all_blocks=1 00:06:36.167 --rc geninfo_unexecuted_blocks=1 00:06:36.167 00:06:36.167 ' 00:06:36.167 15:06:34 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:36.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.168 --rc genhtml_branch_coverage=1 00:06:36.168 --rc genhtml_function_coverage=1 00:06:36.168 --rc genhtml_legend=1 00:06:36.168 --rc geninfo_all_blocks=1 00:06:36.168 --rc geninfo_unexecuted_blocks=1 00:06:36.168 00:06:36.168 ' 00:06:36.168 15:06:34 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.168 --rc genhtml_branch_coverage=1 00:06:36.168 --rc genhtml_function_coverage=1 00:06:36.168 --rc genhtml_legend=1 00:06:36.168 --rc geninfo_all_blocks=1 00:06:36.168 --rc geninfo_unexecuted_blocks=1 00:06:36.168 00:06:36.168 ' 00:06:36.168 15:06:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.168 15:06:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:36.168 15:06:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:36.168 15:06:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.168 15:06:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.168 15:06:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.426 ************************************ 00:06:36.426 START TEST skip_rpc 00:06:36.426 ************************************ 00:06:36.426 15:06:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:36.426 15:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70858 00:06:36.426 15:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:36.426 15:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.426 15:06:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:36.426 [2024-10-01 15:06:34.828124] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
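The skip_rpc case starting here is a negative test: with --no-rpc-server the target must reject RPC calls outright. Reduced to its essentials (paths are this workspace's; the NOT/es=1 wrapper from the trace is paraphrased as a plain if):

    # Start the target with no RPC server, then require that an RPC call fails.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                  # the test sleeps rather than polling
    if scripts/rpc.py spdk_get_version; then
        echo "rpc unexpectedly succeeded with --no-rpc-server" >&2
        kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"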
00:06:36.426 [2024-10-01 15:06:34.828285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:06:36.685 [2024-10-01 15:06:34.996928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.685 [2024-10-01 15:06:35.045040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70858 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70858 ']' 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70858 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70858 00:06:41.952 killing process with pid 70858 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70858' 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70858 00:06:41.952 15:06:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70858 00:06:41.952 ************************************ 00:06:41.952 END TEST skip_rpc 00:06:41.952 ************************************ 00:06:41.952 00:06:41.952 real 0m5.476s 00:06:41.952 user 0m5.039s 00:06:41.952 sys 0m0.366s 00:06:41.952 15:06:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.952 15:06:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:41.952 15:06:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:41.952 15:06:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.953 15:06:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.953 15:06:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.953 ************************************ 00:06:41.953 START TEST skip_rpc_with_json 00:06:41.953 ************************************ 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70946 00:06:41.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70946 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70946 ']' 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.953 15:06:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.953 [2024-10-01 15:06:40.409118] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
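The skip_rpc_with_json run starting here is the "save" half of a save-and-replay cycle: create some state over RPC, then dump it as JSON. A hedged outline (paths shortened, error checks omitted):

    build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    # nvmf_get_transports --trtype tcp errors first ("transport 'tcp' does not
    # exist"), proving the state really comes from the next call.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    kill "$spdk_pid"; wait "$spdk_pid"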
00:06:41.953 [2024-10-01 15:06:40.409285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:06:42.211 [2024-10-01 15:06:40.577402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.211 [2024-10-01 15:06:40.621548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.779 [2024-10-01 15:06:41.279386] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:42.779 request: 00:06:42.779 { 00:06:42.779 "trtype": "tcp", 00:06:42.779 "method": "nvmf_get_transports", 00:06:42.779 "req_id": 1 00:06:42.779 } 00:06:42.779 Got JSON-RPC error response 00:06:42.779 response: 00:06:42.779 { 00:06:42.779 "code": -19, 00:06:42.779 "message": "No such device" 00:06:42.779 } 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.779 [2024-10-01 15:06:41.295455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.779 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.037 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.037 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.037 { 00:06:43.038 "subsystems": [ 00:06:43.038 { 00:06:43.038 "subsystem": "fsdev", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": "fsdev_set_opts", 00:06:43.038 "params": { 00:06:43.038 "fsdev_io_pool_size": 65535, 00:06:43.038 "fsdev_io_cache_size": 256 00:06:43.038 } 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "keyring", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "iobuf", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": "iobuf_set_options", 00:06:43.038 "params": { 00:06:43.038 "small_pool_count": 8192, 00:06:43.038 "large_pool_count": 1024, 00:06:43.038 "small_bufsize": 8192, 00:06:43.038 "large_bufsize": 135168 00:06:43.038 } 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "sock", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": 
"sock_set_default_impl", 00:06:43.038 "params": { 00:06:43.038 "impl_name": "posix" 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "sock_impl_set_options", 00:06:43.038 "params": { 00:06:43.038 "impl_name": "ssl", 00:06:43.038 "recv_buf_size": 4096, 00:06:43.038 "send_buf_size": 4096, 00:06:43.038 "enable_recv_pipe": true, 00:06:43.038 "enable_quickack": false, 00:06:43.038 "enable_placement_id": 0, 00:06:43.038 "enable_zerocopy_send_server": true, 00:06:43.038 "enable_zerocopy_send_client": false, 00:06:43.038 "zerocopy_threshold": 0, 00:06:43.038 "tls_version": 0, 00:06:43.038 "enable_ktls": false 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "sock_impl_set_options", 00:06:43.038 "params": { 00:06:43.038 "impl_name": "posix", 00:06:43.038 "recv_buf_size": 2097152, 00:06:43.038 "send_buf_size": 2097152, 00:06:43.038 "enable_recv_pipe": true, 00:06:43.038 "enable_quickack": false, 00:06:43.038 "enable_placement_id": 0, 00:06:43.038 "enable_zerocopy_send_server": true, 00:06:43.038 "enable_zerocopy_send_client": false, 00:06:43.038 "zerocopy_threshold": 0, 00:06:43.038 "tls_version": 0, 00:06:43.038 "enable_ktls": false 00:06:43.038 } 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "vmd", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "accel", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": "accel_set_options", 00:06:43.038 "params": { 00:06:43.038 "small_cache_size": 128, 00:06:43.038 "large_cache_size": 16, 00:06:43.038 "task_count": 2048, 00:06:43.038 "sequence_count": 2048, 00:06:43.038 "buf_count": 2048 00:06:43.038 } 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "bdev", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": "bdev_set_options", 00:06:43.038 "params": { 00:06:43.038 "bdev_io_pool_size": 65535, 00:06:43.038 "bdev_io_cache_size": 256, 00:06:43.038 "bdev_auto_examine": true, 00:06:43.038 "iobuf_small_cache_size": 128, 00:06:43.038 "iobuf_large_cache_size": 16 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "bdev_raid_set_options", 00:06:43.038 "params": { 00:06:43.038 "process_window_size_kb": 1024, 00:06:43.038 "process_max_bandwidth_mb_sec": 0 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "bdev_iscsi_set_options", 00:06:43.038 "params": { 00:06:43.038 "timeout_sec": 30 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "bdev_nvme_set_options", 00:06:43.038 "params": { 00:06:43.038 "action_on_timeout": "none", 00:06:43.038 "timeout_us": 0, 00:06:43.038 "timeout_admin_us": 0, 00:06:43.038 "keep_alive_timeout_ms": 10000, 00:06:43.038 "arbitration_burst": 0, 00:06:43.038 "low_priority_weight": 0, 00:06:43.038 "medium_priority_weight": 0, 00:06:43.038 "high_priority_weight": 0, 00:06:43.038 "nvme_adminq_poll_period_us": 10000, 00:06:43.038 "nvme_ioq_poll_period_us": 0, 00:06:43.038 "io_queue_requests": 0, 00:06:43.038 "delay_cmd_submit": true, 00:06:43.038 "transport_retry_count": 4, 00:06:43.038 "bdev_retry_count": 3, 00:06:43.038 "transport_ack_timeout": 0, 00:06:43.038 "ctrlr_loss_timeout_sec": 0, 00:06:43.038 "reconnect_delay_sec": 0, 00:06:43.038 "fast_io_fail_timeout_sec": 0, 00:06:43.038 "disable_auto_failback": false, 00:06:43.038 "generate_uuids": false, 00:06:43.038 "transport_tos": 0, 00:06:43.038 "nvme_error_stat": false, 00:06:43.038 "rdma_srq_size": 0, 00:06:43.038 "io_path_stat": false, 00:06:43.038 
"allow_accel_sequence": false, 00:06:43.038 "rdma_max_cq_size": 0, 00:06:43.038 "rdma_cm_event_timeout_ms": 0, 00:06:43.038 "dhchap_digests": [ 00:06:43.038 "sha256", 00:06:43.038 "sha384", 00:06:43.038 "sha512" 00:06:43.038 ], 00:06:43.038 "dhchap_dhgroups": [ 00:06:43.038 "null", 00:06:43.038 "ffdhe2048", 00:06:43.038 "ffdhe3072", 00:06:43.038 "ffdhe4096", 00:06:43.038 "ffdhe6144", 00:06:43.038 "ffdhe8192" 00:06:43.038 ] 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "bdev_nvme_set_hotplug", 00:06:43.038 "params": { 00:06:43.038 "period_us": 100000, 00:06:43.038 "enable": false 00:06:43.038 } 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "method": "bdev_wait_for_examine" 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "scsi", 00:06:43.038 "config": null 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "scheduler", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.038 "method": "framework_set_scheduler", 00:06:43.038 "params": { 00:06:43.038 "name": "static" 00:06:43.038 } 00:06:43.038 } 00:06:43.038 ] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "vhost_scsi", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "vhost_blk", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "ublk", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "nbd", 00:06:43.038 "config": [] 00:06:43.038 }, 00:06:43.038 { 00:06:43.038 "subsystem": "nvmf", 00:06:43.038 "config": [ 00:06:43.038 { 00:06:43.039 "method": "nvmf_set_config", 00:06:43.039 "params": { 00:06:43.039 "discovery_filter": "match_any", 00:06:43.039 "admin_cmd_passthru": { 00:06:43.039 "identify_ctrlr": false 00:06:43.039 }, 00:06:43.039 "dhchap_digests": [ 00:06:43.039 "sha256", 00:06:43.039 "sha384", 00:06:43.039 "sha512" 00:06:43.039 ], 00:06:43.039 "dhchap_dhgroups": [ 00:06:43.039 "null", 00:06:43.039 "ffdhe2048", 00:06:43.039 "ffdhe3072", 00:06:43.039 "ffdhe4096", 00:06:43.039 "ffdhe6144", 00:06:43.039 "ffdhe8192" 00:06:43.039 ] 00:06:43.039 } 00:06:43.039 }, 00:06:43.039 { 00:06:43.039 "method": "nvmf_set_max_subsystems", 00:06:43.039 "params": { 00:06:43.039 "max_subsystems": 1024 00:06:43.039 } 00:06:43.039 }, 00:06:43.039 { 00:06:43.039 "method": "nvmf_set_crdt", 00:06:43.039 "params": { 00:06:43.039 "crdt1": 0, 00:06:43.039 "crdt2": 0, 00:06:43.039 "crdt3": 0 00:06:43.039 } 00:06:43.039 }, 00:06:43.039 { 00:06:43.039 "method": "nvmf_create_transport", 00:06:43.039 "params": { 00:06:43.039 "trtype": "TCP", 00:06:43.039 "max_queue_depth": 128, 00:06:43.039 "max_io_qpairs_per_ctrlr": 127, 00:06:43.039 "in_capsule_data_size": 4096, 00:06:43.039 "max_io_size": 131072, 00:06:43.039 "io_unit_size": 131072, 00:06:43.039 "max_aq_depth": 128, 00:06:43.039 "num_shared_buffers": 511, 00:06:43.039 "buf_cache_size": 4294967295, 00:06:43.039 "dif_insert_or_strip": false, 00:06:43.039 "zcopy": false, 00:06:43.039 "c2h_success": true, 00:06:43.039 "sock_priority": 0, 00:06:43.039 "abort_timeout_sec": 1, 00:06:43.039 "ack_timeout": 0, 00:06:43.039 "data_wr_pool_size": 0 00:06:43.039 } 00:06:43.039 } 00:06:43.039 ] 00:06:43.039 }, 00:06:43.039 { 00:06:43.039 "subsystem": "iscsi", 00:06:43.039 "config": [ 00:06:43.039 { 00:06:43.039 "method": "iscsi_set_options", 00:06:43.039 "params": { 00:06:43.039 "node_base": "iqn.2016-06.io.spdk", 00:06:43.039 "max_sessions": 128, 00:06:43.039 "max_connections_per_session": 2, 00:06:43.039 "max_queue_depth": 64, 00:06:43.039 "default_time2wait": 2, 
00:06:43.039 "default_time2retain": 20, 00:06:43.039 "first_burst_length": 8192, 00:06:43.039 "immediate_data": true, 00:06:43.039 "allow_duplicated_isid": false, 00:06:43.039 "error_recovery_level": 0, 00:06:43.039 "nop_timeout": 60, 00:06:43.039 "nop_in_interval": 30, 00:06:43.039 "disable_chap": false, 00:06:43.039 "require_chap": false, 00:06:43.039 "mutual_chap": false, 00:06:43.039 "chap_group": 0, 00:06:43.039 "max_large_datain_per_connection": 64, 00:06:43.039 "max_r2t_per_connection": 4, 00:06:43.039 "pdu_pool_size": 36864, 00:06:43.039 "immediate_data_pool_size": 16384, 00:06:43.039 "data_out_pool_size": 2048 00:06:43.039 } 00:06:43.039 } 00:06:43.039 ] 00:06:43.039 } 00:06:43.039 ] 00:06:43.039 } 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70946 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70946 ']' 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70946 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70946 00:06:43.039 killing process with pid 70946 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70946' 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70946 00:06:43.039 15:06:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70946 00:06:43.605 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70974 00:06:43.605 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:43.605 15:06:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70974 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70974 ']' 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70974 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70974 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70974' 00:06:48.893 killing process with pid 70974 00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70974 
00:06:48.893 15:06:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70974 00:06:48.893 15:06:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.893 15:06:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.893 00:06:48.893 real 0m7.133s 00:06:48.893 user 0m6.685s 00:06:48.893 sys 0m0.844s 00:06:48.893 15:06:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.893 15:06:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:48.893 ************************************ 00:06:48.893 END TEST skip_rpc_with_json 00:06:48.893 ************************************ 00:06:49.153 15:06:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:49.153 15:06:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.153 15:06:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.153 15:06:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.153 ************************************ 00:06:49.153 START TEST skip_rpc_with_delay 00:06:49.153 ************************************ 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:49.153 [2024-10-01 15:06:47.577739] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
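skip_rpc_with_delay asserts exactly the failure printed above: --wait-for-rpc is meaningless when no RPC server will ever start, so the app must exit non-zero instead of hanging. Reduced sketch:

    # Expected to fail fast: --wait-for-rpc without an RPC server is rejected.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt should have rejected this flag combination" >&2
        exit 1
    fi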
00:06:49.153 [2024-10-01 15:06:47.577897] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.153 00:06:49.153 real 0m0.176s 00:06:49.153 user 0m0.084s 00:06:49.153 sys 0m0.089s 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.153 15:06:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:49.153 ************************************ 00:06:49.153 END TEST skip_rpc_with_delay 00:06:49.153 ************************************ 00:06:49.412 15:06:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:49.412 15:06:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:49.412 15:06:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:49.412 15:06:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.412 15:06:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.412 15:06:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.412 ************************************ 00:06:49.412 START TEST exit_on_failed_rpc_init 00:06:49.412 ************************************ 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71086 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71086 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71086 ']' 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.412 15:06:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:49.412 [2024-10-01 15:06:47.830010] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
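waitforlisten, traced here, blocks until the target's Unix-domain RPC socket answers. A hypothetical minimal equivalent (the real helper is more elaborate; rpc_get_methods is used only as a cheap round-trip):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1                         # died before listening
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }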
00:06:49.412 [2024-10-01 15:06:47.830155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71086 ] 00:06:49.671 [2024-10-01 15:06:47.999817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.671 [2024-10-01 15:06:48.046333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:50.240 15:06:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:50.240 [2024-10-01 15:06:48.766646] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:50.240 [2024-10-01 15:06:48.766969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:06:50.499 [2024-10-01 15:06:48.939919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.499 [2024-10-01 15:06:48.989089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.499 [2024-10-01 15:06:48.989421] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:50.499 [2024-10-01 15:06:48.989574] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:50.499 [2024-10-01 15:06:48.989628] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71086 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71086 ']' 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71086 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71086 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71086' 00:06:50.757 killing process with pid 71086 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71086 00:06:50.757 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71086 00:06:51.325 00:06:51.325 real 0m1.864s 00:06:51.325 user 0m1.973s 00:06:51.325 sys 0m0.610s 00:06:51.325 ************************************ 00:06:51.325 END TEST exit_on_failed_rpc_init 00:06:51.325 ************************************ 00:06:51.325 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.325 15:06:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.325 15:06:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:51.325 00:06:51.325 real 0m15.174s 00:06:51.325 user 0m14.005s 00:06:51.325 sys 0m2.214s 00:06:51.325 15:06:49 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.325 15:06:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.325 ************************************ 00:06:51.325 END TEST skip_rpc 00:06:51.325 ************************************ 00:06:51.325 15:06:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.325 15:06:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.325 15:06:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.325 15:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:51.325 
************************************ 00:06:51.325 START TEST rpc_client 00:06:51.325 ************************************ 00:06:51.325 15:06:49 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:51.325 * Looking for test storage... 00:06:51.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:51.325 15:06:49 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.325 15:06:49 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.325 15:06:49 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.585 15:06:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.585 --rc genhtml_branch_coverage=1 00:06:51.585 --rc genhtml_function_coverage=1 00:06:51.585 --rc genhtml_legend=1 00:06:51.585 --rc geninfo_all_blocks=1 00:06:51.585 --rc geninfo_unexecuted_blocks=1 00:06:51.585 00:06:51.585 ' 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.585 --rc genhtml_branch_coverage=1 00:06:51.585 --rc genhtml_function_coverage=1 00:06:51.585 --rc genhtml_legend=1 00:06:51.585 --rc geninfo_all_blocks=1 00:06:51.585 --rc geninfo_unexecuted_blocks=1 00:06:51.585 00:06:51.585 ' 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.585 --rc genhtml_branch_coverage=1 00:06:51.585 --rc genhtml_function_coverage=1 00:06:51.585 --rc genhtml_legend=1 00:06:51.585 --rc geninfo_all_blocks=1 00:06:51.585 --rc geninfo_unexecuted_blocks=1 00:06:51.585 00:06:51.585 ' 00:06:51.585 15:06:49 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.585 --rc genhtml_branch_coverage=1 00:06:51.585 --rc genhtml_function_coverage=1 00:06:51.585 --rc genhtml_legend=1 00:06:51.585 --rc geninfo_all_blocks=1 00:06:51.585 --rc geninfo_unexecuted_blocks=1 00:06:51.585 00:06:51.585 ' 00:06:51.585 15:06:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:51.585 OK 00:06:51.586 15:06:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:51.586 00:06:51.586 real 0m0.310s 00:06:51.586 user 0m0.154s 00:06:51.586 sys 0m0.169s 00:06:51.586 ************************************ 00:06:51.586 END TEST rpc_client 00:06:51.586 ************************************ 00:06:51.586 15:06:50 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.586 15:06:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:51.586 15:06:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.586 15:06:50 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.586 15:06:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.586 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:51.586 ************************************ 00:06:51.586 START TEST json_config 00:06:51.586 ************************************ 00:06:51.586 15:06:50 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.863 15:06:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.863 15:06:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.863 15:06:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.863 15:06:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.863 15:06:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.863 15:06:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:51.863 15:06:50 json_config -- scripts/common.sh@345 -- # : 1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.863 15:06:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.863 15:06:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@353 -- # local d=1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.863 15:06:50 json_config -- scripts/common.sh@355 -- # echo 1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.863 15:06:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@353 -- # local d=2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.863 15:06:50 json_config -- scripts/common.sh@355 -- # echo 2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.863 15:06:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.863 15:06:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.863 15:06:50 json_config -- scripts/common.sh@368 -- # return 0 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.863 --rc genhtml_branch_coverage=1 00:06:51.863 --rc genhtml_function_coverage=1 00:06:51.863 --rc genhtml_legend=1 00:06:51.863 --rc geninfo_all_blocks=1 00:06:51.863 --rc geninfo_unexecuted_blocks=1 00:06:51.863 00:06:51.863 ' 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.863 --rc genhtml_branch_coverage=1 00:06:51.863 --rc genhtml_function_coverage=1 00:06:51.863 --rc genhtml_legend=1 00:06:51.863 --rc geninfo_all_blocks=1 00:06:51.863 --rc geninfo_unexecuted_blocks=1 00:06:51.863 00:06:51.863 ' 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.863 --rc genhtml_branch_coverage=1 00:06:51.863 --rc genhtml_function_coverage=1 00:06:51.863 --rc genhtml_legend=1 00:06:51.863 --rc geninfo_all_blocks=1 00:06:51.863 --rc geninfo_unexecuted_blocks=1 00:06:51.863 00:06:51.863 ' 00:06:51.863 15:06:50 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.863 --rc genhtml_branch_coverage=1 00:06:51.863 --rc genhtml_function_coverage=1 00:06:51.863 --rc genhtml_legend=1 00:06:51.863 --rc geninfo_all_blocks=1 00:06:51.863 --rc geninfo_unexecuted_blocks=1 00:06:51.863 00:06:51.863 ' 00:06:51.863 15:06:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.863 15:06:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c618085-b29b-41c6-81e1-184c2b306579 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5c618085-b29b-41c6-81e1-184c2b306579 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.863 15:06:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.863 15:06:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.863 15:06:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.863 15:06:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.863 15:06:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.864 15:06:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.864 15:06:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.864 15:06:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.864 15:06:50 json_config -- paths/export.sh@5 -- # export PATH 00:06:51.864 15:06:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@51 -- # : 0 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.864 15:06:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.864 15:06:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:51.864 WARNING: No tests are enabled so not running JSON configuration tests 00:06:51.864 15:06:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:51.864 00:06:51.864 real 0m0.241s 00:06:51.864 user 0m0.128s 00:06:51.864 sys 0m0.104s 00:06:51.864 15:06:50 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.864 15:06:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.864 ************************************ 00:06:51.864 END TEST json_config 00:06:51.864 ************************************ 00:06:51.864 15:06:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:51.864 15:06:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.864 15:06:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.864 15:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.124 ************************************ 00:06:52.124 START TEST json_config_extra_key 00:06:52.124 ************************************ 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.124 15:06:50 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.124 15:06:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.124 --rc genhtml_branch_coverage=1 00:06:52.124 --rc genhtml_function_coverage=1 00:06:52.124 --rc genhtml_legend=1 00:06:52.124 --rc geninfo_all_blocks=1 00:06:52.124 --rc geninfo_unexecuted_blocks=1 00:06:52.124 00:06:52.124 ' 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.124 --rc genhtml_branch_coverage=1 00:06:52.124 --rc genhtml_function_coverage=1 00:06:52.124 --rc genhtml_legend=1 00:06:52.124 --rc geninfo_all_blocks=1 00:06:52.124 --rc geninfo_unexecuted_blocks=1 00:06:52.124 00:06:52.124 ' 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.124 --rc genhtml_branch_coverage=1 00:06:52.124 --rc genhtml_function_coverage=1 00:06:52.124 --rc genhtml_legend=1 00:06:52.124 --rc geninfo_all_blocks=1 00:06:52.124 --rc geninfo_unexecuted_blocks=1 00:06:52.124 00:06:52.124 ' 00:06:52.124 15:06:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.124 --rc genhtml_branch_coverage=1 00:06:52.124 --rc 
genhtml_function_coverage=1 00:06:52.124 --rc genhtml_legend=1 00:06:52.124 --rc geninfo_all_blocks=1 00:06:52.124 --rc geninfo_unexecuted_blocks=1 00:06:52.124 00:06:52.124 ' 00:06:52.124 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c618085-b29b-41c6-81e1-184c2b306579 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5c618085-b29b-41c6-81e1-184c2b306579 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.124 15:06:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.125 15:06:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.125 15:06:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.125 15:06:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.125 15:06:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.125 15:06:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.125 15:06:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.125 15:06:50 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.125 15:06:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:52.125 15:06:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.125 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.125 15:06:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:52.125 INFO: launching applications... 00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
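[Editor's note: each test section above opens with the same scripts/common.sh version check (lt 1.15 2 via cmp_versions) before enabling the newer lcov flags. A simplified sketch of that comparison, assuming missing components compare as 0; the real helper additionally validates each field with its decimal function:]

    # Split both versions on . - :, then compare component-wise as integers.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    # e.g. lt 1.15 2 succeeds, so the harness turns on branch/function coverage.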
00:06:52.125 15:06:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71292 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:52.125 Waiting for target to run... 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71292 /var/tmp/spdk_tgt.sock 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71292 ']' 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:52.125 15:06:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:52.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.125 15:06:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:52.384 [2024-10-01 15:06:50.749058] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:52.384 [2024-10-01 15:06:50.749234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:06:52.650 [2024-10-01 15:06:51.135728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.650 [2024-10-01 15:06:51.167225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.217 00:06:53.217 INFO: shutting down applications... 00:06:53.217 15:06:51 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.217 15:06:51 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:53.217 15:06:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
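[Editor's note: the trace above and the teardown that follows show the json_config start/stop harness. A hedged sketch of that pattern — $SPDK_BIN_DIR and $rootdir stand in for the repo paths shown in the log, and waitforlisten is the helper from common/autotest_common.sh:]

    # Launch the target against a private RPC socket, wait for it to
    # listen, then on shutdown poll up to 30 times after SIGINT.
    app_sock=/var/tmp/spdk_tgt.sock
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 -s 1024 -r "$app_sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    app_pid=$!
    waitforlisten "$app_pid" "$app_sock"

    # ... drive RPCs against $app_sock ...

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # gone -> clean shutdown
        sleep 0.5
    done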
00:06:53.217 15:06:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71292 ]] 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71292 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71292 00:06:53.217 15:06:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71292 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.785 SPDK target shutdown done 00:06:53.785 15:06:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.785 Success 00:06:53.785 15:06:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:53.785 00:06:53.785 real 0m1.657s 00:06:53.785 user 0m1.357s 00:06:53.785 sys 0m0.532s 00:06:53.785 15:06:52 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.785 15:06:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.785 ************************************ 00:06:53.785 END TEST json_config_extra_key 00:06:53.785 ************************************ 00:06:53.785 15:06:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.785 15:06:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.785 15:06:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.785 15:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.785 ************************************ 00:06:53.785 START TEST alias_rpc 00:06:53.785 ************************************ 00:06:53.785 15:06:52 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.785 * Looking for test storage... 
00:06:53.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:53.785 15:06:52 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.785 15:06:52 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.785 15:06:52 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.042 15:06:52 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.042 15:06:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:54.042 15:06:52 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.042 15:06:52 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.042 --rc genhtml_branch_coverage=1 00:06:54.042 --rc genhtml_function_coverage=1 00:06:54.042 --rc genhtml_legend=1 00:06:54.042 --rc geninfo_all_blocks=1 00:06:54.042 --rc geninfo_unexecuted_blocks=1 00:06:54.042 00:06:54.042 ' 00:06:54.042 15:06:52 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.042 --rc genhtml_branch_coverage=1 00:06:54.042 --rc genhtml_function_coverage=1 00:06:54.042 --rc genhtml_legend=1 00:06:54.043 --rc geninfo_all_blocks=1 00:06:54.043 --rc geninfo_unexecuted_blocks=1 00:06:54.043 00:06:54.043 ' 00:06:54.043 15:06:52 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.043 --rc genhtml_branch_coverage=1 00:06:54.043 --rc genhtml_function_coverage=1 00:06:54.043 --rc genhtml_legend=1 00:06:54.043 --rc geninfo_all_blocks=1 00:06:54.043 --rc geninfo_unexecuted_blocks=1 00:06:54.043 00:06:54.043 ' 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.043 --rc genhtml_branch_coverage=1 00:06:54.043 --rc genhtml_function_coverage=1 00:06:54.043 --rc genhtml_legend=1 00:06:54.043 --rc geninfo_all_blocks=1 00:06:54.043 --rc geninfo_unexecuted_blocks=1 00:06:54.043 00:06:54.043 ' 00:06:54.043 15:06:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.043 15:06:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71371 00:06:54.043 15:06:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71371 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71371 ']' 00:06:54.043 15:06:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.043 15:06:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.043 [2024-10-01 15:06:52.481301] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:54.043 [2024-10-01 15:06:52.481450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ] 00:06:54.300 [2024-10-01 15:06:52.646492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.300 [2024-10-01 15:06:52.695217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.868 15:06:53 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.868 15:06:53 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:54.868 15:06:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:55.128 15:06:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71371 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71371 ']' 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71371 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71371 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.128 killing process with pid 71371 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71371' 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@969 -- # kill 71371 00:06:55.128 15:06:53 alias_rpc -- common/autotest_common.sh@974 -- # wait 71371 00:06:55.696 00:06:55.696 real 0m1.861s 00:06:55.696 user 0m1.853s 00:06:55.696 sys 0m0.569s 00:06:55.696 15:06:54 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.696 15:06:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.696 ************************************ 00:06:55.696 END TEST alias_rpc 00:06:55.696 ************************************ 00:06:55.696 15:06:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:55.696 15:06:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.696 15:06:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.696 15:06:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.696 15:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:55.696 ************************************ 00:06:55.696 START TEST spdkcli_tcp 00:06:55.696 ************************************ 00:06:55.696 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:55.696 * Looking for test storage... 
00:06:55.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:55.696 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.696 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.696 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.956 15:06:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.956 --rc genhtml_branch_coverage=1 00:06:55.956 --rc genhtml_function_coverage=1 00:06:55.956 --rc genhtml_legend=1 00:06:55.956 --rc geninfo_all_blocks=1 00:06:55.956 --rc geninfo_unexecuted_blocks=1 00:06:55.956 00:06:55.956 ' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.956 --rc genhtml_branch_coverage=1 00:06:55.956 --rc genhtml_function_coverage=1 00:06:55.956 --rc genhtml_legend=1 00:06:55.956 --rc geninfo_all_blocks=1 00:06:55.956 --rc geninfo_unexecuted_blocks=1 00:06:55.956 
00:06:55.956 ' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.956 --rc genhtml_branch_coverage=1 00:06:55.956 --rc genhtml_function_coverage=1 00:06:55.956 --rc genhtml_legend=1 00:06:55.956 --rc geninfo_all_blocks=1 00:06:55.956 --rc geninfo_unexecuted_blocks=1 00:06:55.956 00:06:55.956 ' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.956 --rc genhtml_branch_coverage=1 00:06:55.956 --rc genhtml_function_coverage=1 00:06:55.956 --rc genhtml_legend=1 00:06:55.956 --rc geninfo_all_blocks=1 00:06:55.956 --rc geninfo_unexecuted_blocks=1 00:06:55.956 00:06:55.956 ' 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71445 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:55.956 15:06:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71445 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71445 ']' 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.956 15:06:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.956 [2024-10-01 15:06:54.429281] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:55.956 [2024-10-01 15:06:54.429531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71445 ] 00:06:56.298 [2024-10-01 15:06:54.601359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.298 [2024-10-01 15:06:54.651336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.298 [2024-10-01 15:06:54.651434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.865 15:06:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.865 15:06:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:56.865 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71462 00:06:56.865 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:56.865 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.125 [ 00:06:57.126 "bdev_malloc_delete", 00:06:57.126 "bdev_malloc_create", 00:06:57.126 "bdev_null_resize", 00:06:57.126 "bdev_null_delete", 00:06:57.126 "bdev_null_create", 00:06:57.126 "bdev_nvme_cuse_unregister", 00:06:57.126 "bdev_nvme_cuse_register", 00:06:57.126 "bdev_opal_new_user", 00:06:57.126 "bdev_opal_set_lock_state", 00:06:57.126 "bdev_opal_delete", 00:06:57.126 "bdev_opal_get_info", 00:06:57.126 "bdev_opal_create", 00:06:57.126 "bdev_nvme_opal_revert", 00:06:57.126 "bdev_nvme_opal_init", 00:06:57.126 "bdev_nvme_send_cmd", 00:06:57.126 "bdev_nvme_set_keys", 00:06:57.126 "bdev_nvme_get_path_iostat", 00:06:57.126 "bdev_nvme_get_mdns_discovery_info", 00:06:57.126 "bdev_nvme_stop_mdns_discovery", 00:06:57.126 "bdev_nvme_start_mdns_discovery", 00:06:57.126 "bdev_nvme_set_multipath_policy", 00:06:57.126 "bdev_nvme_set_preferred_path", 00:06:57.126 "bdev_nvme_get_io_paths", 00:06:57.126 "bdev_nvme_remove_error_injection", 00:06:57.126 "bdev_nvme_add_error_injection", 00:06:57.126 "bdev_nvme_get_discovery_info", 00:06:57.126 "bdev_nvme_stop_discovery", 00:06:57.126 "bdev_nvme_start_discovery", 00:06:57.126 "bdev_nvme_get_controller_health_info", 00:06:57.126 "bdev_nvme_disable_controller", 00:06:57.126 "bdev_nvme_enable_controller", 00:06:57.126 "bdev_nvme_reset_controller", 00:06:57.126 "bdev_nvme_get_transport_statistics", 00:06:57.126 "bdev_nvme_apply_firmware", 00:06:57.126 "bdev_nvme_detach_controller", 00:06:57.126 "bdev_nvme_get_controllers", 00:06:57.126 "bdev_nvme_attach_controller", 00:06:57.126 "bdev_nvme_set_hotplug", 00:06:57.126 "bdev_nvme_set_options", 00:06:57.126 "bdev_passthru_delete", 00:06:57.126 "bdev_passthru_create", 00:06:57.126 "bdev_lvol_set_parent_bdev", 00:06:57.126 "bdev_lvol_set_parent", 00:06:57.126 "bdev_lvol_check_shallow_copy", 00:06:57.126 "bdev_lvol_start_shallow_copy", 00:06:57.126 "bdev_lvol_grow_lvstore", 00:06:57.126 "bdev_lvol_get_lvols", 00:06:57.126 "bdev_lvol_get_lvstores", 00:06:57.126 "bdev_lvol_delete", 00:06:57.126 "bdev_lvol_set_read_only", 00:06:57.126 "bdev_lvol_resize", 00:06:57.126 "bdev_lvol_decouple_parent", 00:06:57.126 "bdev_lvol_inflate", 00:06:57.126 "bdev_lvol_rename", 00:06:57.126 "bdev_lvol_clone_bdev", 00:06:57.126 "bdev_lvol_clone", 00:06:57.126 "bdev_lvol_snapshot", 00:06:57.126 "bdev_lvol_create", 00:06:57.126 "bdev_lvol_delete_lvstore", 00:06:57.126 "bdev_lvol_rename_lvstore", 00:06:57.126 
"bdev_lvol_create_lvstore", 00:06:57.126 "bdev_raid_set_options", 00:06:57.126 "bdev_raid_remove_base_bdev", 00:06:57.126 "bdev_raid_add_base_bdev", 00:06:57.126 "bdev_raid_delete", 00:06:57.126 "bdev_raid_create", 00:06:57.126 "bdev_raid_get_bdevs", 00:06:57.126 "bdev_error_inject_error", 00:06:57.126 "bdev_error_delete", 00:06:57.126 "bdev_error_create", 00:06:57.126 "bdev_split_delete", 00:06:57.126 "bdev_split_create", 00:06:57.126 "bdev_delay_delete", 00:06:57.126 "bdev_delay_create", 00:06:57.126 "bdev_delay_update_latency", 00:06:57.126 "bdev_zone_block_delete", 00:06:57.126 "bdev_zone_block_create", 00:06:57.126 "blobfs_create", 00:06:57.126 "blobfs_detect", 00:06:57.126 "blobfs_set_cache_size", 00:06:57.126 "bdev_xnvme_delete", 00:06:57.126 "bdev_xnvme_create", 00:06:57.126 "bdev_aio_delete", 00:06:57.126 "bdev_aio_rescan", 00:06:57.126 "bdev_aio_create", 00:06:57.126 "bdev_ftl_set_property", 00:06:57.126 "bdev_ftl_get_properties", 00:06:57.126 "bdev_ftl_get_stats", 00:06:57.126 "bdev_ftl_unmap", 00:06:57.126 "bdev_ftl_unload", 00:06:57.126 "bdev_ftl_delete", 00:06:57.126 "bdev_ftl_load", 00:06:57.126 "bdev_ftl_create", 00:06:57.126 "bdev_virtio_attach_controller", 00:06:57.126 "bdev_virtio_scsi_get_devices", 00:06:57.126 "bdev_virtio_detach_controller", 00:06:57.126 "bdev_virtio_blk_set_hotplug", 00:06:57.126 "bdev_iscsi_delete", 00:06:57.126 "bdev_iscsi_create", 00:06:57.126 "bdev_iscsi_set_options", 00:06:57.126 "accel_error_inject_error", 00:06:57.126 "ioat_scan_accel_module", 00:06:57.126 "dsa_scan_accel_module", 00:06:57.126 "iaa_scan_accel_module", 00:06:57.126 "keyring_file_remove_key", 00:06:57.126 "keyring_file_add_key", 00:06:57.126 "keyring_linux_set_options", 00:06:57.126 "fsdev_aio_delete", 00:06:57.126 "fsdev_aio_create", 00:06:57.126 "iscsi_get_histogram", 00:06:57.126 "iscsi_enable_histogram", 00:06:57.126 "iscsi_set_options", 00:06:57.126 "iscsi_get_auth_groups", 00:06:57.126 "iscsi_auth_group_remove_secret", 00:06:57.126 "iscsi_auth_group_add_secret", 00:06:57.126 "iscsi_delete_auth_group", 00:06:57.126 "iscsi_create_auth_group", 00:06:57.126 "iscsi_set_discovery_auth", 00:06:57.126 "iscsi_get_options", 00:06:57.126 "iscsi_target_node_request_logout", 00:06:57.126 "iscsi_target_node_set_redirect", 00:06:57.126 "iscsi_target_node_set_auth", 00:06:57.126 "iscsi_target_node_add_lun", 00:06:57.126 "iscsi_get_stats", 00:06:57.126 "iscsi_get_connections", 00:06:57.126 "iscsi_portal_group_set_auth", 00:06:57.126 "iscsi_start_portal_group", 00:06:57.126 "iscsi_delete_portal_group", 00:06:57.126 "iscsi_create_portal_group", 00:06:57.126 "iscsi_get_portal_groups", 00:06:57.126 "iscsi_delete_target_node", 00:06:57.126 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.126 "iscsi_target_node_add_pg_ig_maps", 00:06:57.126 "iscsi_create_target_node", 00:06:57.126 "iscsi_get_target_nodes", 00:06:57.126 "iscsi_delete_initiator_group", 00:06:57.126 "iscsi_initiator_group_remove_initiators", 00:06:57.126 "iscsi_initiator_group_add_initiators", 00:06:57.126 "iscsi_create_initiator_group", 00:06:57.126 "iscsi_get_initiator_groups", 00:06:57.126 "nvmf_set_crdt", 00:06:57.126 "nvmf_set_config", 00:06:57.126 "nvmf_set_max_subsystems", 00:06:57.126 "nvmf_stop_mdns_prr", 00:06:57.126 "nvmf_publish_mdns_prr", 00:06:57.126 "nvmf_subsystem_get_listeners", 00:06:57.126 "nvmf_subsystem_get_qpairs", 00:06:57.126 "nvmf_subsystem_get_controllers", 00:06:57.126 "nvmf_get_stats", 00:06:57.126 "nvmf_get_transports", 00:06:57.126 "nvmf_create_transport", 00:06:57.126 "nvmf_get_targets", 00:06:57.126 
"nvmf_delete_target", 00:06:57.126 "nvmf_create_target", 00:06:57.126 "nvmf_subsystem_allow_any_host", 00:06:57.126 "nvmf_subsystem_set_keys", 00:06:57.126 "nvmf_subsystem_remove_host", 00:06:57.126 "nvmf_subsystem_add_host", 00:06:57.126 "nvmf_ns_remove_host", 00:06:57.126 "nvmf_ns_add_host", 00:06:57.126 "nvmf_subsystem_remove_ns", 00:06:57.126 "nvmf_subsystem_set_ns_ana_group", 00:06:57.126 "nvmf_subsystem_add_ns", 00:06:57.126 "nvmf_subsystem_listener_set_ana_state", 00:06:57.126 "nvmf_discovery_get_referrals", 00:06:57.126 "nvmf_discovery_remove_referral", 00:06:57.126 "nvmf_discovery_add_referral", 00:06:57.126 "nvmf_subsystem_remove_listener", 00:06:57.126 "nvmf_subsystem_add_listener", 00:06:57.126 "nvmf_delete_subsystem", 00:06:57.126 "nvmf_create_subsystem", 00:06:57.126 "nvmf_get_subsystems", 00:06:57.126 "env_dpdk_get_mem_stats", 00:06:57.126 "nbd_get_disks", 00:06:57.126 "nbd_stop_disk", 00:06:57.126 "nbd_start_disk", 00:06:57.126 "ublk_recover_disk", 00:06:57.126 "ublk_get_disks", 00:06:57.126 "ublk_stop_disk", 00:06:57.126 "ublk_start_disk", 00:06:57.126 "ublk_destroy_target", 00:06:57.126 "ublk_create_target", 00:06:57.126 "virtio_blk_create_transport", 00:06:57.126 "virtio_blk_get_transports", 00:06:57.126 "vhost_controller_set_coalescing", 00:06:57.126 "vhost_get_controllers", 00:06:57.126 "vhost_delete_controller", 00:06:57.126 "vhost_create_blk_controller", 00:06:57.126 "vhost_scsi_controller_remove_target", 00:06:57.126 "vhost_scsi_controller_add_target", 00:06:57.126 "vhost_start_scsi_controller", 00:06:57.126 "vhost_create_scsi_controller", 00:06:57.126 "thread_set_cpumask", 00:06:57.126 "scheduler_set_options", 00:06:57.126 "framework_get_governor", 00:06:57.126 "framework_get_scheduler", 00:06:57.126 "framework_set_scheduler", 00:06:57.126 "framework_get_reactors", 00:06:57.126 "thread_get_io_channels", 00:06:57.126 "thread_get_pollers", 00:06:57.126 "thread_get_stats", 00:06:57.126 "framework_monitor_context_switch", 00:06:57.126 "spdk_kill_instance", 00:06:57.126 "log_enable_timestamps", 00:06:57.126 "log_get_flags", 00:06:57.126 "log_clear_flag", 00:06:57.126 "log_set_flag", 00:06:57.126 "log_get_level", 00:06:57.126 "log_set_level", 00:06:57.126 "log_get_print_level", 00:06:57.126 "log_set_print_level", 00:06:57.126 "framework_enable_cpumask_locks", 00:06:57.126 "framework_disable_cpumask_locks", 00:06:57.126 "framework_wait_init", 00:06:57.126 "framework_start_init", 00:06:57.126 "scsi_get_devices", 00:06:57.126 "bdev_get_histogram", 00:06:57.126 "bdev_enable_histogram", 00:06:57.126 "bdev_set_qos_limit", 00:06:57.126 "bdev_set_qd_sampling_period", 00:06:57.126 "bdev_get_bdevs", 00:06:57.126 "bdev_reset_iostat", 00:06:57.126 "bdev_get_iostat", 00:06:57.126 "bdev_examine", 00:06:57.126 "bdev_wait_for_examine", 00:06:57.126 "bdev_set_options", 00:06:57.126 "accel_get_stats", 00:06:57.126 "accel_set_options", 00:06:57.126 "accel_set_driver", 00:06:57.126 "accel_crypto_key_destroy", 00:06:57.126 "accel_crypto_keys_get", 00:06:57.126 "accel_crypto_key_create", 00:06:57.126 "accel_assign_opc", 00:06:57.126 "accel_get_module_info", 00:06:57.126 "accel_get_opc_assignments", 00:06:57.126 "vmd_rescan", 00:06:57.126 "vmd_remove_device", 00:06:57.126 "vmd_enable", 00:06:57.126 "sock_get_default_impl", 00:06:57.126 "sock_set_default_impl", 00:06:57.126 "sock_impl_set_options", 00:06:57.126 "sock_impl_get_options", 00:06:57.126 "iobuf_get_stats", 00:06:57.126 "iobuf_set_options", 00:06:57.127 "keyring_get_keys", 00:06:57.127 "framework_get_pci_devices", 00:06:57.127 
"framework_get_config", 00:06:57.127 "framework_get_subsystems", 00:06:57.127 "fsdev_set_opts", 00:06:57.127 "fsdev_get_opts", 00:06:57.127 "trace_get_info", 00:06:57.127 "trace_get_tpoint_group_mask", 00:06:57.127 "trace_disable_tpoint_group", 00:06:57.127 "trace_enable_tpoint_group", 00:06:57.127 "trace_clear_tpoint_mask", 00:06:57.127 "trace_set_tpoint_mask", 00:06:57.127 "notify_get_notifications", 00:06:57.127 "notify_get_types", 00:06:57.127 "spdk_get_version", 00:06:57.127 "rpc_get_methods" 00:06:57.127 ] 00:06:57.127 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.127 15:06:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71445 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71445 ']' 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71445 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71445 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.127 killing process with pid 71445 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71445' 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71445 00:06:57.127 15:06:55 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71445 00:06:57.695 00:06:57.695 real 0m1.900s 00:06:57.695 user 0m3.083s 00:06:57.695 sys 0m0.621s 00:06:57.695 15:06:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.695 ************************************ 00:06:57.695 END TEST spdkcli_tcp 00:06:57.695 ************************************ 00:06:57.695 15:06:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.695 15:06:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.695 15:06:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.695 15:06:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.695 15:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:57.695 ************************************ 00:06:57.695 START TEST dpdk_mem_utility 00:06:57.695 ************************************ 00:06:57.695 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:57.695 * Looking for test storage... 
00:06:57.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:57.695 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.695 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.695 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.954 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.954 15:06:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.955 15:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.955 --rc genhtml_branch_coverage=1 00:06:57.955 --rc genhtml_function_coverage=1 00:06:57.955 --rc genhtml_legend=1 00:06:57.955 --rc geninfo_all_blocks=1 00:06:57.955 --rc geninfo_unexecuted_blocks=1 00:06:57.955 00:06:57.955 ' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.955 --rc 
genhtml_branch_coverage=1 00:06:57.955 --rc genhtml_function_coverage=1 00:06:57.955 --rc genhtml_legend=1 00:06:57.955 --rc geninfo_all_blocks=1 00:06:57.955 --rc geninfo_unexecuted_blocks=1 00:06:57.955 00:06:57.955 ' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.955 --rc genhtml_branch_coverage=1 00:06:57.955 --rc genhtml_function_coverage=1 00:06:57.955 --rc genhtml_legend=1 00:06:57.955 --rc geninfo_all_blocks=1 00:06:57.955 --rc geninfo_unexecuted_blocks=1 00:06:57.955 00:06:57.955 ' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.955 --rc genhtml_branch_coverage=1 00:06:57.955 --rc genhtml_function_coverage=1 00:06:57.955 --rc genhtml_legend=1 00:06:57.955 --rc geninfo_all_blocks=1 00:06:57.955 --rc geninfo_unexecuted_blocks=1 00:06:57.955 00:06:57.955 ' 00:06:57.955 15:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:57.955 15:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71545 00:06:57.955 15:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:57.955 15:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71545 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71545 ']' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.955 15:06:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:57.955 [2024-10-01 15:06:56.374768] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:06:57.955 [2024-10-01 15:06:56.374904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71545 ] 00:06:58.213 [2024-10-01 15:06:56.543166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.213 [2024-10-01 15:06:56.593272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.783 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.783 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:58.783 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:58.783 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:58.783 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.783 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.783 { 00:06:58.783 "filename": "/tmp/spdk_mem_dump.txt" 00:06:58.783 } 00:06:58.783 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.783 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:58.783 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:58.783 1 heaps totaling size 860.000000 MiB 00:06:58.783 size: 860.000000 MiB heap id: 0 00:06:58.783 end heaps---------- 00:06:58.783 9 mempools totaling size 642.649841 MiB 00:06:58.783 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:58.783 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:58.783 size: 92.545471 MiB name: bdev_io_71545 00:06:58.783 size: 51.011292 MiB name: evtpool_71545 00:06:58.783 size: 50.003479 MiB name: msgpool_71545 00:06:58.783 size: 36.509338 MiB name: fsdev_io_71545 00:06:58.783 size: 21.763794 MiB name: PDU_Pool 00:06:58.783 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:58.783 size: 0.026123 MiB name: Session_Pool 00:06:58.783 end mempools------- 00:06:58.783 6 memzones totaling size 4.142822 MiB 00:06:58.783 size: 1.000366 MiB name: RG_ring_0_71545 00:06:58.783 size: 1.000366 MiB name: RG_ring_1_71545 00:06:58.783 size: 1.000366 MiB name: RG_ring_4_71545 00:06:58.783 size: 1.000366 MiB name: RG_ring_5_71545 00:06:58.783 size: 0.125366 MiB name: RG_ring_2_71545 00:06:58.783 size: 0.015991 MiB name: RG_ring_3_71545 00:06:58.783 end memzones------- 00:06:58.783 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:58.783 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:06:58.783 list of free elements. 
size: 13.936890 MiB 00:06:58.783 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:58.783 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:58.783 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:58.783 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:58.783 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:58.783 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:58.783 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:58.783 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:58.783 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:58.783 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:58.783 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:58.783 element at address: 0x200003e00000 with size: 0.488281 MiB 00:06:58.783 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:58.783 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:58.783 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:58.783 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:58.783 list of standard malloc elements. size: 199.266418 MiB 00:06:58.783 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:58.783 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:58.783 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:58.783 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:58.783 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:58.783 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:58.783 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:58.783 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:58.783 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:58.783 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:58.783 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:58.783 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:58.783 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:58.784 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:58.784 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:58.784 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d7c0 
with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:58.784 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8934c0 with size: 0.000183 MiB 
00:06:58.785 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:58.785 element at 
address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6eb80 
with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:58.785 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:58.785 list of memzone associated elements. 
size: 646.796692 MiB 00:06:58.785 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:58.785 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:58.785 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:58.785 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:58.785 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:58.785 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71545_0 00:06:58.785 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:58.785 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71545_0 00:06:58.785 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:58.785 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71545_0 00:06:58.785 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:58.785 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71545_0 00:06:58.785 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:58.785 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:58.785 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:58.785 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:58.786 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:58.786 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71545 00:06:58.786 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:58.786 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71545 00:06:58.786 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:58.786 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71545 00:06:58.786 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:58.786 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:58.786 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:58.786 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:58.786 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:58.786 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:58.786 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:58.786 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:58.786 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:58.786 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71545 00:06:58.786 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:58.786 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71545 00:06:58.786 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:58.786 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71545 00:06:58.786 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:58.786 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71545 00:06:58.786 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:58.786 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71545 00:06:58.786 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:58.786 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71545 00:06:58.786 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:58.786 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:58.786 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:58.786 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:58.786 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:58.786 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:58.786 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:58.786 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71545 00:06:58.786 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:58.786 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:58.786 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:58.786 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:58.786 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:58.786 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71545 00:06:58.786 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:58.786 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:58.786 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:58.786 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71545 00:06:58.786 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:58.786 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71545 00:06:58.786 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:58.786 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71545 00:06:58.786 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:58.786 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:58.786 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:58.786 15:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71545 00:06:58.786 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71545 ']' 00:06:58.786 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71545 00:06:58.786 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71545 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71545' 00:06:59.045 killing process with pid 71545 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71545 00:06:59.045 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71545 00:06:59.304 00:06:59.304 real 0m1.742s 00:06:59.304 user 0m1.629s 00:06:59.304 sys 0m0.586s 00:06:59.304 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.304 15:06:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.304 ************************************ 00:06:59.304 END TEST dpdk_mem_utility 00:06:59.304 ************************************ 00:06:59.304 15:06:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:59.304 15:06:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.304 15:06:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.304 15:06:57 -- common/autotest_common.sh@10 -- # set +x 
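Before the event suite begins, the memory report above is worth a note: everything in it came from two helper invocations that are visible in the trace. A minimal sketch for reproducing it by hand against a running spdk_tgt (paths relative to the SPDK repo root; the dump lands wherever env_dpdk_get_mem_stats reports, /tmp/spdk_mem_dump.txt in this run):

  # Trigger the DPDK memory dump via JSON-RPC, then summarize it with the
  # helper script the test used above.
  ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                # heap / mempool / memzone totals
  ./scripts/dpdk_mem_info.py -m 0           # per-element breakdown of heap 0

The -m 0 form corresponds to the long element-by-element listing above; without it the script prints only the aggregated heap, mempool, and memzone sizes.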
00:06:59.563 ************************************ 00:06:59.563 START TEST event 00:06:59.563 ************************************ 00:06:59.563 15:06:57 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:59.563 * Looking for test storage... 00:06:59.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:59.563 15:06:57 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.563 15:06:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.563 15:06:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.563 15:06:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.563 15:06:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.563 15:06:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.563 15:06:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.563 15:06:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.563 15:06:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.563 15:06:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.563 15:06:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.563 15:06:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.563 15:06:58 event -- scripts/common.sh@344 -- # case "$op" in 00:06:59.563 15:06:58 event -- scripts/common.sh@345 -- # : 1 00:06:59.563 15:06:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.563 15:06:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.563 15:06:58 event -- scripts/common.sh@365 -- # decimal 1 00:06:59.563 15:06:58 event -- scripts/common.sh@353 -- # local d=1 00:06:59.563 15:06:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.563 15:06:58 event -- scripts/common.sh@355 -- # echo 1 00:06:59.563 15:06:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.563 15:06:58 event -- scripts/common.sh@366 -- # decimal 2 00:06:59.563 15:06:58 event -- scripts/common.sh@353 -- # local d=2 00:06:59.563 15:06:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.563 15:06:58 event -- scripts/common.sh@355 -- # echo 2 00:06:59.563 15:06:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.563 15:06:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.563 15:06:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.563 15:06:58 event -- scripts/common.sh@368 -- # return 0 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.563 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.563 --rc geninfo_all_blocks=1 00:06:59.563 --rc geninfo_unexecuted_blocks=1 00:06:59.563 00:06:59.563 ' 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.563 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.563 --rc 
geninfo_all_blocks=1 00:06:59.563 --rc geninfo_unexecuted_blocks=1 00:06:59.563 00:06:59.563 ' 00:06:59.563 15:06:58 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.563 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.564 --rc geninfo_all_blocks=1 00:06:59.564 --rc geninfo_unexecuted_blocks=1 00:06:59.564 00:06:59.564 ' 00:06:59.564 15:06:58 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.564 --rc genhtml_branch_coverage=1 00:06:59.564 --rc genhtml_function_coverage=1 00:06:59.564 --rc genhtml_legend=1 00:06:59.564 --rc geninfo_all_blocks=1 00:06:59.564 --rc geninfo_unexecuted_blocks=1 00:06:59.564 00:06:59.564 ' 00:06:59.564 15:06:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.564 15:06:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.564 15:06:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.564 15:06:58 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:59.564 15:06:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.564 15:06:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.564 ************************************ 00:06:59.564 START TEST event_perf 00:06:59.564 ************************************ 00:06:59.564 15:06:58 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:59.824 Running I/O for 1 seconds...[2024-10-01 15:06:58.155742] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:06:59.824 [2024-10-01 15:06:58.155878] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71631 ] 00:06:59.824 [2024-10-01 15:06:58.325369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.084 [2024-10-01 15:06:58.376523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.084 [2024-10-01 15:06:58.376729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.084 [2024-10-01 15:06:58.376783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.084 Running I/O for 1 seconds...[2024-10-01 15:06:58.376923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.029 00:07:01.029 lcore 0: 198276 00:07:01.029 lcore 1: 198275 00:07:01.029 lcore 2: 198274 00:07:01.029 lcore 3: 198275 00:07:01.029 done. 
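The four counters above are per-lcore event counts from a 1-second run on core mask 0xF, i.e. roughly 198k events per core. The benchmark can be rerun standalone with a different mask or window; a minimal sketch using the same binary and flags the harness passed above:

  # -m sets the reactor core mask, -t the measurement time in seconds,
  # matching the event_perf invocation shown in the trace.
  ./test/event/event_perf/event_perf -m 0x3 -t 5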
00:07:01.029 00:07:01.029 real 0m1.368s 00:07:01.029 user 0m4.110s 00:07:01.029 sys 0m0.132s 00:07:01.029 15:06:59 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.029 ************************************ 00:07:01.029 15:06:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.029 END TEST event_perf 00:07:01.029 ************************************ 00:07:01.029 15:06:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:01.029 15:06:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:01.029 15:06:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.029 15:06:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.030 ************************************ 00:07:01.030 START TEST event_reactor 00:07:01.030 ************************************ 00:07:01.030 15:06:59 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:01.288 [2024-10-01 15:06:59.601997] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:01.288 [2024-10-01 15:06:59.602135] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71667 ] 00:07:01.288 [2024-10-01 15:06:59.773165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.288 [2024-10-01 15:06:59.826991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.665 test_start 00:07:02.665 oneshot 00:07:02.665 tick 100 00:07:02.665 tick 100 00:07:02.665 tick 250 00:07:02.665 tick 100 00:07:02.665 tick 100 00:07:02.665 tick 100 00:07:02.665 tick 250 00:07:02.665 tick 500 00:07:02.665 tick 100 00:07:02.665 tick 100 00:07:02.665 tick 250 00:07:02.665 tick 100 00:07:02.665 tick 100 00:07:02.665 test_end 00:07:02.665 00:07:02.665 real 0m1.374s 00:07:02.665 user 0m1.150s 00:07:02.665 sys 0m0.116s 00:07:02.665 15:07:00 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.665 15:07:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:02.665 ************************************ 00:07:02.665 END TEST event_reactor 00:07:02.665 ************************************ 00:07:02.665 15:07:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.665 15:07:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:02.665 15:07:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.665 15:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.665 ************************************ 00:07:02.665 START TEST event_reactor_perf 00:07:02.665 ************************************ 00:07:02.665 15:07:00 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:02.665 [2024-10-01 15:07:01.039669] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:02.665 [2024-10-01 15:07:01.039822] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:07:02.665 [2024-10-01 15:07:01.210042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.924 [2024-10-01 15:07:01.262940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.860 test_start 00:07:03.860 test_end 00:07:03.860 Performance: 375766 events per second 00:07:03.860 00:07:03.860 real 0m1.365s 00:07:03.860 user 0m1.143s 00:07:03.860 sys 0m0.114s 00:07:03.860 15:07:02 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.860 15:07:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.860 ************************************ 00:07:03.860 END TEST event_reactor_perf 00:07:03.860 ************************************ 00:07:04.118 15:07:02 event -- event/event.sh@49 -- # uname -s 00:07:04.118 15:07:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.118 15:07:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.118 15:07:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.118 15:07:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.118 15:07:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.118 ************************************ 00:07:04.118 START TEST event_scheduler 00:07:04.118 ************************************ 00:07:04.118 15:07:02 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:04.118 * Looking for test storage... 
00:07:04.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:04.118 15:07:02 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.118 15:07:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.118 15:07:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.119 15:07:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.119 --rc genhtml_branch_coverage=1 00:07:04.119 --rc genhtml_function_coverage=1 00:07:04.119 --rc genhtml_legend=1 00:07:04.119 --rc geninfo_all_blocks=1 00:07:04.119 --rc geninfo_unexecuted_blocks=1 00:07:04.119 00:07:04.119 ' 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.119 --rc genhtml_branch_coverage=1 00:07:04.119 --rc genhtml_function_coverage=1 00:07:04.119 --rc genhtml_legend=1 00:07:04.119 --rc geninfo_all_blocks=1 00:07:04.119 --rc geninfo_unexecuted_blocks=1 00:07:04.119 00:07:04.119 ' 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.119 --rc genhtml_branch_coverage=1 00:07:04.119 --rc genhtml_function_coverage=1 00:07:04.119 --rc genhtml_legend=1 00:07:04.119 --rc geninfo_all_blocks=1 00:07:04.119 --rc geninfo_unexecuted_blocks=1 00:07:04.119 00:07:04.119 ' 00:07:04.119 15:07:02 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.119 --rc genhtml_branch_coverage=1 00:07:04.119 --rc genhtml_function_coverage=1 00:07:04.119 --rc genhtml_legend=1 00:07:04.119 --rc geninfo_all_blocks=1 00:07:04.119 --rc geninfo_unexecuted_blocks=1 00:07:04.119 00:07:04.119 ' 00:07:04.119 15:07:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.393 15:07:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71774 00:07:04.393 15:07:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.393 15:07:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.394 15:07:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71774 00:07:04.394 15:07:02 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71774 ']' 00:07:04.394 15:07:02 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.394 15:07:02 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.394 15:07:02 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.394 15:07:02 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.394 15:07:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.394 [2024-10-01 15:07:02.754282] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:04.394 [2024-10-01 15:07:02.754420] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71774 ] 00:07:04.394 [2024-10-01 15:07:02.924395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.651 [2024-10-01 15:07:02.976727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.651 [2024-10-01 15:07:02.976914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.651 [2024-10-01 15:07:02.976933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.651 [2024-10-01 15:07:02.977044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:05.216 15:07:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.216 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.216 POWER: Cannot set governor of lcore 0 to performance 00:07:05.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.216 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:05.216 POWER: Cannot set governor of lcore 0 to userspace 00:07:05.216 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:05.216 POWER: Unable to set Power Management Environment for lcore 0 00:07:05.216 [2024-10-01 15:07:03.594020] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:05.216 [2024-10-01 15:07:03.594060] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:05.216 [2024-10-01 15:07:03.594087] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.216 [2024-10-01 15:07:03.594190] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.216 [2024-10-01 
15:07:03.594202] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.216 [2024-10-01 15:07:03.594215] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 [2024-10-01 15:07:03.666385] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 ************************************ 00:07:05.216 START TEST scheduler_create_thread 00:07:05.216 ************************************ 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 2 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 3 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 4 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 5 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 6 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.216 7 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.216 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.475 8 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.475 9 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.475 15:07:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.850 10 00:07:06.850 15:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.850 15:07:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:06.850 15:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.850 15:07:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.418 15:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.418 15:07:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.418 15:07:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.418 15:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.418 15:07:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.355 15:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.355 15:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:08.355 15:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.355 15:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.924 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.924 15:07:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:08.924 15:07:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:08.924 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.924 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.492 ************************************ 00:07:09.492 END TEST scheduler_create_thread 00:07:09.492 ************************************ 00:07:09.492 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.492 00:07:09.492 real 0m4.201s 00:07:09.492 user 0m0.028s 00:07:09.492 sys 0m0.010s 00:07:09.492 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.492 15:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.492 15:07:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:09.492 15:07:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71774 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71774 ']' 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71774 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71774 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:09.492 killing process with pid 71774 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71774' 00:07:09.492 15:07:07 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71774 00:07:09.492 15:07:08 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71774 00:07:09.750 [2024-10-01 15:07:08.163183] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:10.008 00:07:10.008 real 0m6.049s 00:07:10.008 user 0m13.396s 00:07:10.008 sys 0m0.552s 00:07:10.008 ************************************ 00:07:10.008 END TEST event_scheduler 00:07:10.008 ************************************ 00:07:10.008 15:07:08 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.008 15:07:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.008 15:07:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.008 15:07:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.008 15:07:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.008 15:07:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.008 15:07:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.266 ************************************ 00:07:10.267 START TEST app_repeat 00:07:10.267 ************************************ 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.267 Process app_repeat pid: 71891 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71891 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71891' 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.267 spdk_app_start Round 0 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.267 15:07:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71891 /var/tmp/spdk-nbd.sock 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71891 ']' 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.267 15:07:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.267 [2024-10-01 15:07:08.623954] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
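The event_scheduler run that ends above reduces to a short RPC sequence: select the dynamic scheduler before subsystem init, complete init, then create, retune, and delete threads through the test app's plugin RPCs. A minimal sketch of that flow, with the socket path, core masks, and values copied from the trace (rpc_cmd in the trace is a test-harness wrapper; invoking rpc.py directly, as below, is an illustration and assumes the scheduler_plugin module is importable by rpc.py):

  # Assumes the test app was started with --wait-for-rpc, as above:
  #   test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc framework_set_scheduler dynamic   # must run before framework_start_init
  $rpc framework_start_init              # completes subsystem initialization

  # Plugin RPCs exercised by scheduler_create_thread; -m is a core mask,
  # -a the active percentage. The create call prints the new thread id,
  # which the trace captures (thread_id=11, thread_id=12).
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"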
00:07:10.267 [2024-10-01 15:07:08.624099] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71891 ] 00:07:10.267 [2024-10-01 15:07:08.792813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.525 [2024-10-01 15:07:08.849020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.525 [2024-10-01 15:07:08.849119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.179 15:07:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.179 15:07:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:11.179 15:07:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.438 Malloc0 00:07:11.438 15:07:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.696 Malloc1 00:07:11.696 15:07:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.696 /dev/nbd0 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.696 15:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.696 15:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:11.696 15:07:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:11.696 15:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.696 15:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.696 15:07:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:11.955 15:07:10 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.955 1+0 records in 00:07:11.955 1+0 records out 00:07:11.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371592 s, 11.0 MB/s 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.955 15:07:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:11.955 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.955 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.955 15:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.213 /dev/nbd1 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.213 1+0 records in 00:07:12.213 1+0 records out 00:07:12.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459034 s, 8.9 MB/s 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.213 15:07:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.213 15:07:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.213 
15:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.472 { 00:07:12.472 "nbd_device": "/dev/nbd0", 00:07:12.472 "bdev_name": "Malloc0" 00:07:12.472 }, 00:07:12.472 { 00:07:12.472 "nbd_device": "/dev/nbd1", 00:07:12.472 "bdev_name": "Malloc1" 00:07:12.472 } 00:07:12.472 ]' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.472 { 00:07:12.472 "nbd_device": "/dev/nbd0", 00:07:12.472 "bdev_name": "Malloc0" 00:07:12.472 }, 00:07:12.472 { 00:07:12.472 "nbd_device": "/dev/nbd1", 00:07:12.472 "bdev_name": "Malloc1" 00:07:12.472 } 00:07:12.472 ]' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.472 /dev/nbd1' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.472 /dev/nbd1' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.472 256+0 records in 00:07:12.472 256+0 records out 00:07:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741729 s, 141 MB/s 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.472 256+0 records in 00:07:12.472 256+0 records out 00:07:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306547 s, 34.2 MB/s 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.472 256+0 records in 00:07:12.472 256+0 records out 00:07:12.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367584 s, 28.5 MB/s 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.472 15:07:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.472 15:07:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.472 15:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.730 15:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.996 15:07:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.996 15:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.256 15:07:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.256 15:07:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.525 15:07:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.783 [2024-10-01 15:07:12.148915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.783 [2024-10-01 15:07:12.200727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.783 [2024-10-01 15:07:12.200729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.783 [2024-10-01 15:07:12.243498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.783 [2024-10-01 15:07:12.243798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.130 15:07:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.130 spdk_app_start Round 1 00:07:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.130 15:07:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:17.130 15:07:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71891 /var/tmp/spdk-nbd.sock 00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71891 ']' 00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
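Round 0, which finishes above, is the same nbd write/verify cycle each app_repeat round performs. Condensed into one place, with every command taken from the trace (the loop is a restatement for readability, not the test script itself; the trace writes both devices first and then compares both):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  rand=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  $rpc bdev_malloc_create 64 4096        # -> Malloc0 (64 MiB, 4096-byte blocks)
  $rpc bdev_malloc_create 64 4096        # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as a kernel nbd device
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # push 1 MiB of random data through each nbd device, then read it back
  dd if=/dev/urandom of="$rand" bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
    dd if="$rand" of="$d" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$rand" "$d"            # any mismatch fails the round
  done
  rm "$rand"

  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM        # tear down before the next round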
00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.130 15:07:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.130 15:07:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.130 15:07:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:17.130 15:07:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.130 Malloc0 00:07:17.130 15:07:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.130 Malloc1 00:07:17.130 15:07:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.130 15:07:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.389 /dev/nbd0 00:07:17.389 15:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.389 15:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.389 1+0 records in 00:07:17.389 1+0 records out 
00:07:17.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378357 s, 10.8 MB/s 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.389 15:07:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.389 15:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.389 15:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.389 15:07:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.647 /dev/nbd1 00:07:17.647 15:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.647 15:07:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.647 15:07:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.647 1+0 records in 00:07:17.647 1+0 records out 00:07:17.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339649 s, 12.1 MB/s 00:07:17.648 15:07:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.648 15:07:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.648 15:07:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.648 15:07:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.648 15:07:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.648 15:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.648 15:07:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.648 15:07:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.648 15:07:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.648 15:07:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.905 { 00:07:17.905 "nbd_device": "/dev/nbd0", 00:07:17.905 "bdev_name": "Malloc0" 00:07:17.905 }, 00:07:17.905 { 00:07:17.905 "nbd_device": "/dev/nbd1", 00:07:17.905 "bdev_name": "Malloc1" 00:07:17.905 } 
00:07:17.905 ]' 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.905 { 00:07:17.905 "nbd_device": "/dev/nbd0", 00:07:17.905 "bdev_name": "Malloc0" 00:07:17.905 }, 00:07:17.905 { 00:07:17.905 "nbd_device": "/dev/nbd1", 00:07:17.905 "bdev_name": "Malloc1" 00:07:17.905 } 00:07:17.905 ]' 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.905 /dev/nbd1' 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.905 15:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.905 /dev/nbd1' 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.906 15:07:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.164 256+0 records in 00:07:18.164 256+0 records out 00:07:18.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114006 s, 92.0 MB/s 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.164 256+0 records in 00:07:18.164 256+0 records out 00:07:18.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261638 s, 40.1 MB/s 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.164 256+0 records in 00:07:18.164 256+0 records out 00:07:18.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265724 s, 39.5 MB/s 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.164 15:07:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.164 15:07:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.422 15:07:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.682 15:07:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.682 15:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.946 15:07:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.946 15:07:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.205 15:07:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.205 [2024-10-01 15:07:17.660388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.205 [2024-10-01 15:07:17.702658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.205 [2024-10-01 15:07:17.702682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.205 [2024-10-01 15:07:17.745056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.205 [2024-10-01 15:07:17.745125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.490 spdk_app_start Round 2 00:07:22.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.490 15:07:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:22.490 15:07:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:22.490 15:07:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71891 /var/tmp/spdk-nbd.sock 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71891 ']' 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
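The waitfornbd/waitfornbd_exit helpers traced throughout these rounds gate each step on /proc/partitions, and every round ends by confirming nbd_get_disks reports nothing exported. A rough restatement of that logic (the 20-try bound, the grep -q -w test, and the dd/stat size check all appear in the trace; the sleep interval between tries is an assumption for illustration):

  waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # a one-block O_DIRECT read proves the device actually serves I/O
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp"); rm -f "$tmp"
    [ "$size" != 0 ]
  }

  waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions || return 0
      sleep 0.1
    done
    return 1
  }

  # end-of-round check: no nbd devices may remain exported
  count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
          jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || echo "nbd devices still exported" >&2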
00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.490 15:07:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:22.490 15:07:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.490 Malloc0 00:07:22.490 15:07:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.747 Malloc1 00:07:22.747 15:07:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.747 15:07:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.015 /dev/nbd0 00:07:23.015 15:07:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.015 15:07:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.015 1+0 records in 00:07:23.015 1+0 records out 
00:07:23.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206469 s, 19.8 MB/s 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.015 15:07:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.015 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.015 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.015 15:07:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:23.273 /dev/nbd1 00:07:23.273 15:07:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.273 15:07:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.273 1+0 records in 00:07:23.273 1+0 records out 00:07:23.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456195 s, 9.0 MB/s 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.273 15:07:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.273 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.273 15:07:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.273 15:07:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.274 15:07:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.274 15:07:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.531 { 00:07:23.531 "nbd_device": "/dev/nbd0", 00:07:23.531 "bdev_name": "Malloc0" 00:07:23.531 }, 00:07:23.531 { 00:07:23.531 "nbd_device": "/dev/nbd1", 00:07:23.531 "bdev_name": "Malloc1" 00:07:23.531 } 
00:07:23.531 ]' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.531 { 00:07:23.531 "nbd_device": "/dev/nbd0", 00:07:23.531 "bdev_name": "Malloc0" 00:07:23.531 }, 00:07:23.531 { 00:07:23.531 "nbd_device": "/dev/nbd1", 00:07:23.531 "bdev_name": "Malloc1" 00:07:23.531 } 00:07:23.531 ]' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.531 /dev/nbd1' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.531 /dev/nbd1' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.531 15:07:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:23.531 256+0 records in 00:07:23.531 256+0 records out 00:07:23.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129829 s, 80.8 MB/s 00:07:23.531 15:07:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.531 15:07:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.531 256+0 records in 00:07:23.531 256+0 records out 00:07:23.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260251 s, 40.3 MB/s 00:07:23.531 15:07:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.531 15:07:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.789 256+0 records in 00:07:23.789 256+0 records out 00:07:23.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03107 s, 33.7 MB/s 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.789 15:07:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.047 15:07:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.305 15:07:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.306 15:07:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.306 15:07:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.306 15:07:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.565 15:07:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.565 15:07:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.823 15:07:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.823 [2024-10-01 15:07:23.291413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.823 [2024-10-01 15:07:23.337677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.823 [2024-10-01 15:07:23.337680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.082 [2024-10-01 15:07:23.380475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.082 [2024-10-01 15:07:23.380542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:27.668 15:07:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71891 /var/tmp/spdk-nbd.sock 00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71891 ']' 00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
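For reference, the app_repeat NBD pass traced above reduces to a short shell flow: after bdev_malloc_create 64 4096 produces Malloc0 and Malloc1, pair each bdev with an NBD node over the RPC socket, wait for each node to become readable, push 1 MiB of random data through each device with O_DIRECT, and cmp it back. A minimal sketch, assuming a running spdk_tgt already listening on /var/tmp/spdk-nbd.sock; variable names are illustrative, the real helpers live in bdev/nbd_common.sh:

# Condensed sketch of the write/verify pass traced above (illustrative names).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest

$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

for nbd in /dev/nbd0 /dev/nbd1; do
  # waitfornbd: poll until the node shows up in /proc/partitions,
  # then prove it is usable with one direct 4 KiB read.
  for i in $(seq 1 20); do
    grep -q -w "${nbd#/dev/}" /proc/partitions && break
    sleep 0.1
  done
  dd if="$nbd" of="$tmp" bs=4096 count=1 iflag=direct
done

# Write 1 MiB of random data through each device and compare it back.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" "$nbd"
done
rm -f "$tmp"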
00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.668 15:07:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:27.927 15:07:26 event.app_repeat -- event/event.sh@39 -- # killprocess 71891 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71891 ']' 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71891 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71891 00:07:27.927 killing process with pid 71891 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71891' 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71891 00:07:27.927 15:07:26 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71891 00:07:28.186 spdk_app_start is called in Round 0. 00:07:28.186 Shutdown signal received, stop current app iteration 00:07:28.186 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:07:28.186 spdk_app_start is called in Round 1. 00:07:28.186 Shutdown signal received, stop current app iteration 00:07:28.187 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:07:28.187 spdk_app_start is called in Round 2. 00:07:28.187 Shutdown signal received, stop current app iteration 00:07:28.187 Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 reinitialization... 00:07:28.187 spdk_app_start is called in Round 3. 00:07:28.187 Shutdown signal received, stop current app iteration 00:07:28.187 ************************************ 00:07:28.187 END TEST app_repeat 00:07:28.187 ************************************ 00:07:28.187 15:07:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:28.187 15:07:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:28.187 00:07:28.187 real 0m18.047s 00:07:28.187 user 0m39.442s 00:07:28.187 sys 0m3.244s 00:07:28.187 15:07:26 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.187 15:07:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.187 15:07:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:28.187 15:07:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:28.187 15:07:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.187 15:07:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.187 15:07:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.187 ************************************ 00:07:28.187 START TEST cpu_locks 00:07:28.187 ************************************ 00:07:28.187 15:07:26 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:28.447 * Looking for test storage... 
00:07:28.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.447 15:07:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 
00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 15:07:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:28.447 15:07:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:28.447 15:07:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:28.447 15:07:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.447 15:07:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.447 ************************************ 00:07:28.447 START TEST default_locks 00:07:28.447 ************************************ 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72327 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72327 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72327 ']' 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.447 15:07:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.707 [2024-10-01 15:07:27.032558] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:28.707 [2024-10-01 15:07:27.032708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72327 ] 00:07:28.707 [2024-10-01 15:07:27.201851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.707 [2024-10-01 15:07:27.248817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.645 15:07:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.645 15:07:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:29.645 15:07:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72327 00:07:29.645 15:07:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72327 00:07:29.645 15:07:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72327 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72327 ']' 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72327 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72327 00:07:29.905 killing process with pid 72327 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72327' 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72327 00:07:29.905 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72327 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72327 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72327 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:30.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
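The `NOT waitforlisten 72327` sequence unwinding here is autotest_common.sh's negative assertion: run the wrapped command, capture its exit status, and succeed only if it failed. A stripped-down sketch of the pattern, simplified; the real helper also validates the argument through valid_exec_arg and treats statuses above 128 (signal deaths) separately:

NOT() {
    local es=0
    "$@" || es=$?
    # Invert: succeed only when the wrapped command failed on its own.
    (( es != 0 && es <= 128 ))
}

# As used here: the target was just killed, so waitforlisten must fail.
NOT waitforlisten 72327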
00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72327 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72327 ']' 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 ERROR: process (pid: 72327) is no longer running 00:07:30.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72327) - No such process 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:30.476 ************************************ 00:07:30.476 END TEST default_locks 00:07:30.476 ************************************ 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:30.476 00:07:30.476 real 0m1.870s 00:07:30.476 user 0m1.836s 00:07:30.476 sys 0m0.672s 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.476 15:07:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 15:07:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:30.476 15:07:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.476 15:07:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.476 15:07:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 ************************************ 00:07:30.476 START TEST default_locks_via_rpc 00:07:30.476 ************************************ 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72380 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72380 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72380 ']' 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.476 15:07:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.476 [2024-10-01 15:07:28.983525] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:30.476 [2024-10-01 15:07:28.983676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:07:30.735 [2024-10-01 15:07:29.151403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.735 [2024-10-01 15:07:29.198414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.303 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.304 15:07:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.304 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72380 00:07:31.304 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.304 15:07:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72380 00:07:31.872 15:07:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72380 
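default_locks_via_rpc, traced above, flips the core locks at runtime rather than at startup: framework_disable_cpumask_locks should release pid 72380's lock and framework_enable_cpumask_locks should re-claim it, which locks_exist verifies through lslocks. The essence, with $pid standing in for the target's pid (72380 here); note that the real no_locks helper additionally checks that no lock files linger on disk:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc framework_disable_cpumask_locks          # release the core lock
! lslocks -p "$pid" | grep -q spdk_cpu_lock   # nothing held anymore

$rpc framework_enable_cpumask_locks           # re-claim the core
lslocks -p "$pid" | grep -q spdk_cpu_lock     # lock is back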
00:07:31.872 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72380 ']' 00:07:31.872 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72380 00:07:31.872 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:31.873 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.873 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72380 00:07:32.132 killing process with pid 72380 00:07:32.132 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.132 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.132 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72380' 00:07:32.132 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72380 00:07:32.132 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72380 00:07:32.390 00:07:32.390 real 0m1.981s 00:07:32.390 user 0m1.988s 00:07:32.390 sys 0m0.698s 00:07:32.390 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.390 15:07:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 ************************************ 00:07:32.390 END TEST default_locks_via_rpc 00:07:32.390 ************************************ 00:07:32.390 15:07:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:32.390 15:07:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.390 15:07:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.390 15:07:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.390 ************************************ 00:07:32.390 START TEST non_locking_app_on_locked_coremask 00:07:32.390 ************************************ 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72432 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72432 /var/tmp/spdk.sock 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72432 ']' 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
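The non_locking_app_on_locked_coremask run starting here (pids 72432 and 72448 below) checks that a second target may share core 0 only by opting out of lock enforcement. Reduced to its two launches, with the flags exactly as they appear in the trace:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$bin -m 0x1 & pid1=$!          # primary: claims the core-0 lock
# secondary: same mask, but skips lock acquisition and uses its own socket
$bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!

lslocks -p "$pid1" | grep -q spdk_cpu_lock   # only the primary holds the lock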
00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.390 15:07:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.649 [2024-10-01 15:07:31.046404] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:32.649 [2024-10-01 15:07:31.046749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72432 ] 00:07:32.909 [2024-10-01 15:07:31.218542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.909 [2024-10-01 15:07:31.269414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72448 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72448 /var/tmp/spdk2.sock 00:07:33.478 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72448 ']' 00:07:33.479 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.479 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.479 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.479 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.479 15:07:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.479 [2024-10-01 15:07:31.974534] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:33.479 [2024-10-01 15:07:31.974894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72448 ] 00:07:33.738 [2024-10-01 15:07:32.143821] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
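Teardown throughout this file goes through the killprocess helper whose xtrace repeats above (pids 72327, 72380, ...). Condensed to its core, with the uname branch for non-Linux hosts and the comm=sudo special case reduced to comments:

# Condensed killprocess sketch, following the xtrace above (simplified).
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1    # must still be running
    # the real helper branches on uname and special-cases comm=sudo,
    # as seen via: ps --no-headers -o comm= "$pid"
    echo "killing process with pid $pid"
    kill "$pid"                   # SIGTERM by default
    wait "$pid"                   # reap the child, propagate exit status
}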
00:07:33.738 [2024-10-01 15:07:32.143898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.738 [2024-10-01 15:07:32.247328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.306 15:07:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.306 15:07:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:34.306 15:07:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72432 00:07:34.306 15:07:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72432 00:07:34.306 15:07:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72432 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72432 ']' 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72432 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72432 00:07:35.684 killing process with pid 72432 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72432' 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72432 00:07:35.684 15:07:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72432 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72448 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72448 ']' 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72448 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72448 00:07:36.252 killing process with pid 72448 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72448' 00:07:36.252 15:07:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72448 00:07:36.252 15:07:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72448 00:07:36.820 00:07:36.820 real 0m4.207s 00:07:36.820 user 0m4.429s 00:07:36.820 sys 0m1.387s 00:07:36.820 15:07:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.820 15:07:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.820 ************************************ 00:07:36.820 END TEST non_locking_app_on_locked_coremask 00:07:36.820 ************************************ 00:07:36.820 15:07:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:36.820 15:07:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.820 15:07:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.820 15:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.820 ************************************ 00:07:36.820 START TEST locking_app_on_unlocked_coremask 00:07:36.820 ************************************ 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72517 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72517 /var/tmp/spdk.sock 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72517 ']' 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.820 15:07:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.820 [2024-10-01 15:07:35.307748] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:36.820 [2024-10-01 15:07:35.307889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72517 ] 00:07:37.080 [2024-10-01 15:07:35.480492] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:37.080 [2024-10-01 15:07:35.480785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.080 [2024-10-01 15:07:35.528094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72533 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72533 /var/tmp/spdk2.sock 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72533 ']' 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.649 15:07:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.907 [2024-10-01 15:07:36.245964] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:37.907 [2024-10-01 15:07:36.246310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72533 ] 00:07:37.907 [2024-10-01 15:07:36.413969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.166 [2024-10-01 15:07:36.512921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.734 15:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.734 15:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:38.734 15:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72533 00:07:38.734 15:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72533 00:07:38.734 15:07:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72517 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72517 ']' 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72517 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72517 00:07:39.670 killing process with pid 72517 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72517' 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72517 00:07:39.670 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72517 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72533 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72533 ']' 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72533 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72533 00:07:40.608 killing process with pid 72533 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.608 15:07:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72533' 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72533 00:07:40.608 15:07:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72533 00:07:40.866 ************************************ 00:07:40.866 END TEST locking_app_on_unlocked_coremask 00:07:40.866 ************************************ 00:07:40.866 00:07:40.866 real 0m4.188s 00:07:40.866 user 0m4.489s 00:07:40.866 sys 0m1.324s 00:07:40.866 15:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.866 15:07:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.145 15:07:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:41.145 15:07:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.145 15:07:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.145 15:07:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.145 ************************************ 00:07:41.145 START TEST locking_app_on_locked_coremask 00:07:41.145 ************************************ 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72602 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72602 /var/tmp/spdk.sock 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72602 ']' 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.145 15:07:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.145 [2024-10-01 15:07:39.569404] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:41.145 [2024-10-01 15:07:39.569537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72602 ] 00:07:41.405 [2024-10-01 15:07:39.725829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.406 [2024-10-01 15:07:39.775094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72618 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72618 /var/tmp/spdk2.sock 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72618 /var/tmp/spdk2.sock 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72618 /var/tmp/spdk2.sock 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72618 ']' 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.974 15:07:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.974 [2024-10-01 15:07:40.521953] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:42.233 [2024-10-01 15:07:40.522297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72618 ] 00:07:42.234 [2024-10-01 15:07:40.691006] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72602 has claimed it. 00:07:42.234 [2024-10-01 15:07:40.691119] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:42.802 ERROR: process (pid: 72618) is no longer running 00:07:42.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72618) - No such process 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72602 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72602 00:07:42.802 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72602 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72602 ']' 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72602 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72602 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.372 killing process with pid 72602 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72602' 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72602 00:07:43.372 15:07:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72602 00:07:43.631 00:07:43.631 real 0m2.628s 00:07:43.631 user 0m2.860s 00:07:43.631 sys 0m0.839s 00:07:43.631 15:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.631 ************************************ 00:07:43.631 END 
TEST locking_app_on_locked_coremask 00:07:43.632 ************************************ 00:07:43.632 15:07:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.632 15:07:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:43.632 15:07:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.632 15:07:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.632 15:07:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.632 ************************************ 00:07:43.632 START TEST locking_overlapped_coremask 00:07:43.632 ************************************ 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:43.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72671 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72671 /var/tmp/spdk.sock 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72671 ']' 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.632 15:07:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.893 [2024-10-01 15:07:42.283268] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:43.893 [2024-10-01 15:07:42.283733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72671 ] 00:07:44.151 [2024-10-01 15:07:42.456617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.151 [2024-10-01 15:07:42.509479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.151 [2024-10-01 15:07:42.509574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.151 [2024-10-01 15:07:42.509662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72689 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72689 /var/tmp/spdk2.sock 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72689 /var/tmp/spdk2.sock 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72689 /var/tmp/spdk2.sock 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72689 ']' 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.716 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.716 [2024-10-01 15:07:43.204068] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:07:44.716 [2024-10-01 15:07:43.204453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:07:44.973 [2024-10-01 15:07:43.373020] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72671 has claimed it. 00:07:44.973 [2024-10-01 15:07:43.373092] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:45.542 ERROR: process (pid: 72689) is no longer running 00:07:45.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72689) - No such process 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72671 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72671 ']' 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72671 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72671 00:07:45.542 killing process with pid 72671 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72671' 00:07:45.542 15:07:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72671 00:07:45.542 15:07:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72671 00:07:45.800 ************************************ 00:07:45.800 END TEST locking_overlapped_coremask 00:07:45.800 ************************************ 00:07:45.800 00:07:45.800 real 0m2.125s 00:07:45.800 user 0m5.514s 00:07:45.800 sys 0m0.605s 00:07:45.800 15:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.800 15:07:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.059 15:07:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:46.059 15:07:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.059 15:07:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.059 15:07:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.059 ************************************ 00:07:46.059 START TEST locking_overlapped_coremask_via_rpc 00:07:46.059 ************************************ 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72731 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72731 /var/tmp/spdk.sock 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72731 ']' 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.059 15:07:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.059 [2024-10-01 15:07:44.475829] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:46.059 [2024-10-01 15:07:44.476181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:07:46.318 [2024-10-01 15:07:44.645429] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
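A note on the startup just logged: with --disable-cpumask-locks the target boots without claiming any core-lock files, hence the "CPU core locks deactivated" notice; the test re-enables the locks over RPC further down. A minimal sketch of what that means on disk, assuming the /var/tmp/spdk_cpu_lock_* naming that appears later in this log:

# While the locks are deactivated, no lock files should exist for this target:
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no CPU core locks held'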
00:07:46.318 [2024-10-01 15:07:44.645761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.318 [2024-10-01 15:07:44.698153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.318 [2024-10-01 15:07:44.698163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.318 [2024-10-01 15:07:44.698293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72749 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72749 /var/tmp/spdk2.sock 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72749 ']' 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.885 15:07:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-10-01 15:07:45.407026] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:46.885 [2024-10-01 15:07:45.407386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72749 ] 00:07:47.144 [2024-10-01 15:07:45.574097] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:47.144 [2024-10-01 15:07:45.574180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.144 [2024-10-01 15:07:45.680785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.144 [2024-10-01 15:07:45.684301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.144 [2024-10-01 15:07:45.684404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.711 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.970 [2024-10-01 15:07:46.263420] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72731 has claimed it. 
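The claim failure above is the expected result and the point of the test: coremask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets contend for core 2. A quick sketch of that arithmetic in plain bash (nothing SPDK-specific is assumed):

# Decode each -m coremask into core numbers, then show the overlap:
for mask in 0x7 0x1c; do
  cores=()
  for ((i = 0; i < 8; i++)); do (( mask >> i & 1 )) && cores+=("$i"); done
  echo "$mask -> cores ${cores[*]}"   # 0x7 -> cores 0 1 2; 0x1c -> cores 2 3 4
done
printf 'overlap: 0x%x -> core 2\n' $(( 0x7 & 0x1c ))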
00:07:47.970 request: 00:07:47.970 { 00:07:47.970 "method": "framework_enable_cpumask_locks", 00:07:47.970 "req_id": 1 00:07:47.970 } 00:07:47.970 Got JSON-RPC error response 00:07:47.970 response: 00:07:47.970 { 00:07:47.970 "code": -32603, 00:07:47.970 "message": "Failed to claim CPU core: 2" 00:07:47.970 } 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72731 /var/tmp/spdk.sock 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72731 ']' 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72749 /var/tmp/spdk2.sock 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72749 ']' 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
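The JSON-RPC exchange above is the raw traffic behind the wrapped rpc_cmd call. Replayed by hand it would look roughly like this (the socket paths and error text are taken from this log; rpc.py stands for the scripts/rpc.py used elsewhere in this run):

# First target (spdk.sock) holds cores 0-2, so the second target's claim of core 2 fails:
rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -> code -32603, "Failed to claim CPU core: 2"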
00:07:47.970 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.971 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:48.230 ************************************ 00:07:48.230 END TEST locking_overlapped_coremask_via_rpc 00:07:48.230 ************************************ 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.230 00:07:48.230 real 0m2.354s 00:07:48.230 user 0m1.074s 00:07:48.230 sys 0m0.202s 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.230 15:07:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.488 15:07:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:48.488 15:07:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72731 ]] 00:07:48.488 15:07:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72731 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72731 ']' 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72731 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72731 00:07:48.488 killing process with pid 72731 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72731' 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72731 00:07:48.488 15:07:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72731 00:07:48.747 15:07:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72749 ]] 00:07:48.747 15:07:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72749 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72749 ']' 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72749 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.747 
15:07:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72749 00:07:48.747 killing process with pid 72749 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72749' 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72749 00:07:48.747 15:07:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72749 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72731 ]] 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72731 00:07:49.315 Process with pid 72731 is not found 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72731 ']' 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72731 00:07:49.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72731) - No such process 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72731 is not found' 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72749 ]] 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72749 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72749 ']' 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72749 00:07:49.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72749) - No such process 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72749 is not found' 00:07:49.315 Process with pid 72749 is not found 00:07:49.315 15:07:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.315 00:07:49.315 real 0m21.050s 00:07:49.315 user 0m33.871s 00:07:49.315 sys 0m6.933s 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.315 15:07:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.315 ************************************ 00:07:49.315 END TEST cpu_locks 00:07:49.315 ************************************ 00:07:49.315 ************************************ 00:07:49.315 END TEST event 00:07:49.315 ************************************ 00:07:49.316 00:07:49.316 real 0m49.926s 00:07:49.316 user 1m33.386s 00:07:49.316 sys 0m11.496s 00:07:49.316 15:07:47 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.316 15:07:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.316 15:07:47 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:49.316 15:07:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.316 15:07:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.316 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.573 ************************************ 00:07:49.574 START TEST thread 00:07:49.574 ************************************ 00:07:49.574 15:07:47 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:49.574 * Looking for test storage... 
00:07:49.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:49.574 15:07:47 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:49.574 15:07:47 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:49.574 15:07:47 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.574 15:07:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.574 15:07:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.574 15:07:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.574 15:07:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.574 15:07:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.574 15:07:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.574 15:07:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.574 15:07:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.574 15:07:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.574 15:07:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.574 15:07:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.574 15:07:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:49.574 15:07:48 thread -- scripts/common.sh@345 -- # : 1 00:07:49.574 15:07:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.574 15:07:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.574 15:07:48 thread -- scripts/common.sh@365 -- # decimal 1 00:07:49.574 15:07:48 thread -- scripts/common.sh@353 -- # local d=1 00:07:49.574 15:07:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.574 15:07:48 thread -- scripts/common.sh@355 -- # echo 1 00:07:49.574 15:07:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.574 15:07:48 thread -- scripts/common.sh@366 -- # decimal 2 00:07:49.574 15:07:48 thread -- scripts/common.sh@353 -- # local d=2 00:07:49.574 15:07:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.574 15:07:48 thread -- scripts/common.sh@355 -- # echo 2 00:07:49.574 15:07:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.574 15:07:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.574 15:07:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.574 15:07:48 thread -- scripts/common.sh@368 -- # return 0 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.574 --rc genhtml_branch_coverage=1 00:07:49.574 --rc genhtml_function_coverage=1 00:07:49.574 --rc genhtml_legend=1 00:07:49.574 --rc geninfo_all_blocks=1 00:07:49.574 --rc geninfo_unexecuted_blocks=1 00:07:49.574 00:07:49.574 ' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.574 --rc genhtml_branch_coverage=1 00:07:49.574 --rc genhtml_function_coverage=1 00:07:49.574 --rc genhtml_legend=1 00:07:49.574 --rc geninfo_all_blocks=1 00:07:49.574 --rc geninfo_unexecuted_blocks=1 00:07:49.574 00:07:49.574 ' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:49.574 --rc genhtml_branch_coverage=1 00:07:49.574 --rc genhtml_function_coverage=1 00:07:49.574 --rc genhtml_legend=1 00:07:49.574 --rc geninfo_all_blocks=1 00:07:49.574 --rc geninfo_unexecuted_blocks=1 00:07:49.574 00:07:49.574 ' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.574 --rc genhtml_branch_coverage=1 00:07:49.574 --rc genhtml_function_coverage=1 00:07:49.574 --rc genhtml_legend=1 00:07:49.574 --rc geninfo_all_blocks=1 00:07:49.574 --rc geninfo_unexecuted_blocks=1 00:07:49.574 00:07:49.574 ' 00:07:49.574 15:07:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.574 15:07:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.574 ************************************ 00:07:49.574 START TEST thread_poller_perf 00:07:49.574 ************************************ 00:07:49.574 15:07:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:49.832 [2024-10-01 15:07:48.162728] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:49.832 [2024-10-01 15:07:48.163448] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72882 ] 00:07:49.832 [2024-10-01 15:07:48.328990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.832 Running 1000 pollers for 1 seconds with 1 microseconds period. 
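The banner above comes from the poller_perf invocation logged before it. Reading the flags against that banner text (an inference from this log, not from a checked usage message), -b sets the poller count, -l the poll period in microseconds, and -t the run time in seconds:

poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 second: this run
poller_perf -b 1000 -l 0 -t 1   # 0 us period: the variant run next in this log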
00:07:49.832 [2024-10-01 15:07:48.375810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.209 ====================================== 00:07:51.209 busy:2500434088 (cyc) 00:07:51.209 total_run_count: 388000 00:07:51.209 tsc_hz: 2490000000 (cyc) 00:07:51.209 ====================================== 00:07:51.209 poller_cost: 6444 (cyc), 2587 (nsec) 00:07:51.209 ************************************ 00:07:51.209 END TEST thread_poller_perf 00:07:51.209 ************************************ 00:07:51.209 00:07:51.209 real 0m1.364s 00:07:51.209 user 0m1.146s 00:07:51.209 sys 0m0.111s 00:07:51.209 15:07:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.209 15:07:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:51.209 15:07:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:51.209 15:07:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:51.209 15:07:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.209 15:07:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.209 ************************************ 00:07:51.209 START TEST thread_poller_perf 00:07:51.209 ************************************ 00:07:51.209 15:07:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:51.209 [2024-10-01 15:07:49.596861] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:51.209 [2024-10-01 15:07:49.597016] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72918 ] 00:07:51.469 [2024-10-01 15:07:49.764716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.469 Running 1000 pollers for 1 seconds with 0 microseconds period. 
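The poller_cost reported for the first run above is simply the busy cycle count divided by the run count, converted to nanoseconds via the reported TSC frequency; the second run just below works out the same way (2493994666 / 5081000 = 490 cyc, about 196 ns). A sketch of the arithmetic with the first run's numbers; integer math reproduces the logged 6444 cyc / 2587 nsec exactly:

busy=2500434088; runs=388000; tsc_hz=2490000000
cyc=$(( busy / runs ))                   # 2500434088 / 388000 = 6444 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))    # 6444 cycles at 2.49 GHz = 2587 ns
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"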
00:07:51.469 [2024-10-01 15:07:49.815396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.407 ====================================== 00:07:52.407 busy:2493994666 (cyc) 00:07:52.407 total_run_count: 5081000 00:07:52.407 tsc_hz: 2490000000 (cyc) 00:07:52.407 ====================================== 00:07:52.407 poller_cost: 490 (cyc), 196 (nsec) 00:07:52.407 ************************************ 00:07:52.407 END TEST thread_poller_perf 00:07:52.407 ************************************ 00:07:52.407 00:07:52.407 real 0m1.363s 00:07:52.407 user 0m1.156s 00:07:52.407 sys 0m0.101s 00:07:52.407 15:07:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.407 15:07:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.666 15:07:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:52.666 ************************************ 00:07:52.666 END TEST thread 00:07:52.666 ************************************ 00:07:52.666 00:07:52.666 real 0m3.111s 00:07:52.666 user 0m2.488s 00:07:52.666 sys 0m0.416s 00:07:52.666 15:07:50 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.667 15:07:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.667 15:07:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:52.667 15:07:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.667 15:07:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.667 15:07:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.667 15:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.667 ************************************ 00:07:52.667 START TEST app_cmdline 00:07:52.667 ************************************ 00:07:52.667 15:07:51 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.667 * Looking for test storage... 00:07:52.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:52.667 15:07:51 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:52.667 15:07:51 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:52.667 15:07:51 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.925 15:07:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:52.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.925 --rc genhtml_branch_coverage=1 00:07:52.925 --rc genhtml_function_coverage=1 00:07:52.925 --rc genhtml_legend=1 00:07:52.925 --rc geninfo_all_blocks=1 00:07:52.925 --rc geninfo_unexecuted_blocks=1 00:07:52.925 00:07:52.925 ' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:52.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.925 --rc genhtml_branch_coverage=1 00:07:52.925 --rc genhtml_function_coverage=1 00:07:52.925 --rc genhtml_legend=1 00:07:52.925 --rc geninfo_all_blocks=1 00:07:52.925 --rc geninfo_unexecuted_blocks=1 00:07:52.925 00:07:52.925 ' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:52.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.925 --rc genhtml_branch_coverage=1 00:07:52.925 --rc genhtml_function_coverage=1 00:07:52.925 --rc genhtml_legend=1 00:07:52.925 --rc geninfo_all_blocks=1 00:07:52.925 --rc geninfo_unexecuted_blocks=1 00:07:52.925 00:07:52.925 ' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:52.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.925 --rc genhtml_branch_coverage=1 00:07:52.925 --rc genhtml_function_coverage=1 00:07:52.925 --rc genhtml_legend=1 00:07:52.925 --rc geninfo_all_blocks=1 00:07:52.925 --rc geninfo_unexecuted_blocks=1 00:07:52.925 00:07:52.925 ' 00:07:52.925 15:07:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.925 15:07:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73002 00:07:52.925 15:07:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.925 15:07:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73002 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 73002 ']' 00:07:52.925 15:07:51 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.926 15:07:51 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.926 15:07:51 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.926 15:07:51 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.926 15:07:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.926 [2024-10-01 15:07:51.381295] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:52.926 [2024-10-01 15:07:51.382021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:07:53.185 [2024-10-01 15:07:51.550227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.185 [2024-10-01 15:07:51.597358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.754 15:07:52 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.754 15:07:52 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:53.754 15:07:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:54.013 { 00:07:54.013 "version": "SPDK v25.01-pre git sha1 e9b861378", 00:07:54.013 "fields": { 00:07:54.013 "major": 25, 00:07:54.013 "minor": 1, 00:07:54.013 "patch": 0, 00:07:54.013 "suffix": "-pre", 00:07:54.013 "commit": "e9b861378" 00:07:54.013 } 00:07:54.013 } 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:54.013 15:07:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:54.013 15:07:52 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.272 request: 00:07:54.272 { 00:07:54.272 "method": "env_dpdk_get_mem_stats", 00:07:54.272 "req_id": 1 00:07:54.272 } 00:07:54.272 Got JSON-RPC error response 00:07:54.272 response: 00:07:54.272 { 00:07:54.272 "code": -32601, 00:07:54.272 "message": "Method not found" 00:07:54.272 } 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.272 15:07:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73002 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 73002 ']' 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 73002 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73002 00:07:54.272 killing process with pid 73002 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73002' 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@969 -- # kill 73002 00:07:54.272 15:07:52 app_cmdline -- common/autotest_common.sh@974 -- # wait 73002 00:07:54.839 ************************************ 00:07:54.839 END TEST app_cmdline 00:07:54.839 ************************************ 00:07:54.839 00:07:54.839 real 0m2.106s 00:07:54.839 user 0m2.322s 00:07:54.839 sys 0m0.627s 00:07:54.839 15:07:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.839 15:07:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.839 15:07:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.839 15:07:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.839 15:07:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.839 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:54.839 ************************************ 00:07:54.839 START TEST version 00:07:54.839 ************************************ 00:07:54.839 15:07:53 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.839 * Looking for test storage... 
00:07:54.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.839 15:07:53 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.839 15:07:53 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.839 15:07:53 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:55.098 15:07:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.098 15:07:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.098 15:07:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.098 15:07:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.098 15:07:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.098 15:07:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.098 15:07:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.098 15:07:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.098 15:07:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.098 15:07:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.098 15:07:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.098 15:07:53 version -- scripts/common.sh@344 -- # case "$op" in 00:07:55.098 15:07:53 version -- scripts/common.sh@345 -- # : 1 00:07:55.098 15:07:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.098 15:07:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.098 15:07:53 version -- scripts/common.sh@365 -- # decimal 1 00:07:55.098 15:07:53 version -- scripts/common.sh@353 -- # local d=1 00:07:55.098 15:07:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.098 15:07:53 version -- scripts/common.sh@355 -- # echo 1 00:07:55.098 15:07:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.098 15:07:53 version -- scripts/common.sh@366 -- # decimal 2 00:07:55.098 15:07:53 version -- scripts/common.sh@353 -- # local d=2 00:07:55.098 15:07:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.098 15:07:53 version -- scripts/common.sh@355 -- # echo 2 00:07:55.098 15:07:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.098 15:07:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.098 15:07:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.098 15:07:53 version -- scripts/common.sh@368 -- # return 0 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.098 --rc genhtml_branch_coverage=1 00:07:55.098 --rc genhtml_function_coverage=1 00:07:55.098 --rc genhtml_legend=1 00:07:55.098 --rc geninfo_all_blocks=1 00:07:55.098 --rc geninfo_unexecuted_blocks=1 00:07:55.098 00:07:55.098 ' 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.098 --rc genhtml_branch_coverage=1 00:07:55.098 --rc genhtml_function_coverage=1 00:07:55.098 --rc genhtml_legend=1 00:07:55.098 --rc geninfo_all_blocks=1 00:07:55.098 --rc geninfo_unexecuted_blocks=1 00:07:55.098 00:07:55.098 ' 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:55.098 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:55.098 --rc genhtml_branch_coverage=1 00:07:55.098 --rc genhtml_function_coverage=1 00:07:55.098 --rc genhtml_legend=1 00:07:55.098 --rc geninfo_all_blocks=1 00:07:55.098 --rc geninfo_unexecuted_blocks=1 00:07:55.098 00:07:55.098 ' 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:55.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.098 --rc genhtml_branch_coverage=1 00:07:55.098 --rc genhtml_function_coverage=1 00:07:55.098 --rc genhtml_legend=1 00:07:55.098 --rc geninfo_all_blocks=1 00:07:55.098 --rc geninfo_unexecuted_blocks=1 00:07:55.098 00:07:55.098 ' 00:07:55.098 15:07:53 version -- app/version.sh@17 -- # get_header_version major 00:07:55.098 15:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # cut -f2 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.098 15:07:53 version -- app/version.sh@17 -- # major=25 00:07:55.098 15:07:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.098 15:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # cut -f2 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.098 15:07:53 version -- app/version.sh@18 -- # minor=1 00:07:55.098 15:07:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.098 15:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # cut -f2 00:07:55.098 15:07:53 version -- app/version.sh@19 -- # patch=0 00:07:55.098 15:07:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.098 15:07:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # cut -f2 00:07:55.098 15:07:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.098 15:07:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.098 15:07:53 version -- app/version.sh@22 -- # version=25.1 00:07:55.098 15:07:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.098 15:07:53 version -- app/version.sh@28 -- # version=25.1rc0 00:07:55.098 15:07:53 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:55.098 15:07:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.098 15:07:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:55.098 15:07:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:55.098 ************************************ 00:07:55.098 END TEST version 00:07:55.098 ************************************ 00:07:55.098 00:07:55.098 real 0m0.322s 00:07:55.098 user 0m0.178s 00:07:55.098 sys 0m0.200s 00:07:55.098 15:07:53 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.098 15:07:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.098 15:07:53 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:55.098 15:07:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:55.099 15:07:53 -- spdk/autotest.sh@194 -- # uname -s 00:07:55.099 15:07:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:55.099 15:07:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:55.099 15:07:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:55.099 15:07:53 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:55.099 15:07:53 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:55.099 15:07:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.099 15:07:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.099 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.099 ************************************ 00:07:55.099 START TEST blockdev_nvme 00:07:55.099 ************************************ 00:07:55.099 15:07:53 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:55.358 * Looking for test storage... 00:07:55.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.358 15:07:53 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.358 --rc genhtml_branch_coverage=1 00:07:55.358 --rc genhtml_function_coverage=1 00:07:55.358 --rc genhtml_legend=1 00:07:55.358 --rc geninfo_all_blocks=1 00:07:55.358 --rc geninfo_unexecuted_blocks=1 00:07:55.358 00:07:55.358 ' 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.358 --rc genhtml_branch_coverage=1 00:07:55.358 --rc genhtml_function_coverage=1 00:07:55.358 --rc genhtml_legend=1 00:07:55.358 --rc geninfo_all_blocks=1 00:07:55.358 --rc geninfo_unexecuted_blocks=1 00:07:55.358 00:07:55.358 ' 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.358 --rc genhtml_branch_coverage=1 00:07:55.358 --rc genhtml_function_coverage=1 00:07:55.358 --rc genhtml_legend=1 00:07:55.358 --rc geninfo_all_blocks=1 00:07:55.358 --rc geninfo_unexecuted_blocks=1 00:07:55.358 00:07:55.358 ' 00:07:55.358 15:07:53 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:55.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.358 --rc genhtml_branch_coverage=1 00:07:55.358 --rc genhtml_function_coverage=1 00:07:55.358 --rc genhtml_legend=1 00:07:55.358 --rc geninfo_all_blocks=1 00:07:55.358 --rc geninfo_unexecuted_blocks=1 00:07:55.358 00:07:55.358 ' 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:55.358 15:07:53 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:55.358 15:07:53 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73168 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:55.359 15:07:53 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73168 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 73168 ']' 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.359 15:07:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.619 [2024-10-01 15:07:53.981000] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
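The version gate traced above (scripts/common.sh, "lt 1.15 2") walks dotted version strings component by component. A minimal bash re-creation of that comparison, reconstructed from the traced statements rather than copied from the script:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:   # split on dots, dashes, and colons, as in the trace
        local op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing components compare as 0, so 1.15 vs 2 behaves like 1.15.0 vs 2.0.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all components equal
    }

Here "lt 1.15 2" succeeds because 1 < 2 on the first component, which appears to be what routes this run onto the --rc lcov option set exported as LCOV_OPTS above.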
00:07:55.619 [2024-10-01 15:07:53.981138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73168 ] 00:07:55.619 [2024-10-01 15:07:54.142772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.878 [2024-10-01 15:07:54.188012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.455 15:07:54 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.455 15:07:54 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.455 15:07:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:56.455 15:07:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.455 15:07:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.714 15:07:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.714 15:07:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:56.714 15:07:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.714 15:07:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.714 15:07:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.714 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.714 15:07:55 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "adb8be16-22d8-4541-9725-4de13e47f58b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "adb8be16-22d8-4541-9725-4de13e47f58b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "cc0c1cad-84d1-4ef2-b092-edac4ef99a75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cc0c1cad-84d1-4ef2-b092-edac4ef99a75",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "46e7da2f-98fe-418d-a914-3d55d86f7c04"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46e7da2f-98fe-418d-a914-3d55d86f7c04",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "582f8ab2-8362-4528-8daa-90c9fcf84611"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "582f8ab2-8362-4528-8daa-90c9fcf84611",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "164ae9bc-250d-4b86-9013-9cfffc95eedd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "164ae9bc-250d-4b86-9013-9cfffc95eedd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "998e7156-f7ff-4e5c-b196-011da8595b7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "998e7156-f7ff-4e5c-b196-011da8595b7a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:56.973 15:07:55 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73168 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 73168 ']' 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 73168 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:56.973 15:07:55 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73168 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.973 killing process with pid 73168 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73168' 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 73168 00:07:56.973 15:07:55 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 73168 00:07:57.542 15:07:55 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:57.542 15:07:55 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:57.542 15:07:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:57.542 15:07:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.542 15:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.542 ************************************ 00:07:57.542 START TEST bdev_hello_world 00:07:57.542 ************************************ 00:07:57.542 15:07:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:57.542 [2024-10-01 15:07:55.973289] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:07:57.542 [2024-10-01 15:07:55.973441] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73241 ] 00:07:57.802 [2024-10-01 15:07:56.143229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.802 [2024-10-01 15:07:56.193016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.061 [2024-10-01 15:07:56.583803] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:58.061 [2024-10-01 15:07:56.583864] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:58.061 [2024-10-01 15:07:56.583889] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:58.061 [2024-10-01 15:07:56.586289] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:58.061 [2024-10-01 15:07:56.587032] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:58.061 [2024-10-01 15:07:56.587071] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:58.061 [2024-10-01 15:07:56.587337] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
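The hello-world pass above is a single traced command plus its NOTICE output, so it can be re-run by hand. A sketch assuming the repo layout shown in the log:

    cd /home/vagrant/spdk_repo/spdk
    # Same gen_nvme.sh-generated four-controller config, same target bdev:
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # Per the trace, the app opens Nvme0n1, opens an io channel, writes
    # "Hello World!", reads it back, and stops.

Any other bdev name reported by bdev_get_bdevs earlier in the run should be usable with -b as well.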
00:07:58.061 00:07:58.061 [2024-10-01 15:07:56.587368] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:58.321 00:07:58.321 real 0m0.932s 00:07:58.321 user 0m0.579s 00:07:58.321 sys 0m0.250s 00:07:58.321 15:07:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.321 15:07:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:58.321 ************************************ 00:07:58.321 END TEST bdev_hello_world 00:07:58.321 ************************************ 00:07:58.582 15:07:56 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:58.582 15:07:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:58.582 15:07:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.582 15:07:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.582 ************************************ 00:07:58.582 START TEST bdev_bounds 00:07:58.582 ************************************ 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73272 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.582 Process bdevio pid: 73272 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73272' 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73272 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73272 ']' 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.582 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.583 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.583 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.583 15:07:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:58.583 [2024-10-01 15:07:56.975477] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
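The bounds test pairs two traced commands: the bdevio app started against the shared JSON config, and a helper script that triggers the run over the RPC socket. A sketch of that sequencing; the readings of -w and -s 0 are inferences from context, not taken from bdevio documentation:

    # Start the CUnit I/O tester; -w appears to hold it until a run is
    # requested over RPC, and -s 0 seems to reserve no extra memory
    # (both readings are assumptions based on the traced behavior).
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # Kick off the suites; the "I/O targets" listing and the per-test
    # passed lines below come from this call.
    test/bdev/bdevio/tests.py perform_tests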
00:07:58.583 [2024-10-01 15:07:56.975627] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73272 ] 00:07:58.843 [2024-10-01 15:07:57.145682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.843 [2024-10-01 15:07:57.196128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.843 [2024-10-01 15:07:57.196359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.843 [2024-10-01 15:07:57.196445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.414 15:07:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.414 15:07:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:59.414 15:07:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:59.414 I/O targets: 00:07:59.414 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:59.414 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:59.414 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:59.414 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:59.414 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:59.414 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:59.414 00:07:59.414 00:07:59.414 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.414 http://cunit.sourceforge.net/ 00:07:59.414 00:07:59.414 00:07:59.414 Suite: bdevio tests on: Nvme3n1 00:07:59.414 Test: blockdev write read block ...passed 00:07:59.414 Test: blockdev write zeroes read block ...passed 00:07:59.414 Test: blockdev write zeroes read no split ...passed 00:07:59.414 Test: blockdev write zeroes read split ...passed 00:07:59.414 Test: blockdev write zeroes read split partial ...passed 00:07:59.414 Test: blockdev reset ...[2024-10-01 15:07:57.956740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:59.414 [2024-10-01 15:07:57.959158] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.414 passed 00:07:59.414 Test: blockdev write read 8 blocks ...passed 00:07:59.414 Test: blockdev write read size > 128k ...passed 00:07:59.414 Test: blockdev write read invalid size ...passed 00:07:59.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.414 Test: blockdev write read max offset ...passed 00:07:59.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.675 Test: blockdev writev readv 8 blocks ...passed 00:07:59.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.675 Test: blockdev writev readv block ...passed 00:07:59.675 Test: blockdev writev readv size > 128k ...passed 00:07:59.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.675 Test: blockdev comparev and writev ...[2024-10-01 15:07:57.965674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7c06000 len:0x1000 00:07:59.675 [2024-10-01 15:07:57.965722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev nvme passthru rw ...passed 00:07:59.675 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:07:57.966689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.675 [2024-10-01 15:07:57.966730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev nvme admin passthru ...passed 00:07:59.675 Test: blockdev copy ...passed 00:07:59.675 Suite: bdevio tests on: Nvme2n3 00:07:59.675 Test: blockdev write read block ...passed 00:07:59.675 Test: blockdev write zeroes read block ...passed 00:07:59.675 Test: blockdev write zeroes read no split ...passed 00:07:59.675 Test: blockdev write zeroes read split ...passed 00:07:59.675 Test: blockdev write zeroes read split partial ...passed 00:07:59.675 Test: blockdev reset ...[2024-10-01 15:07:57.982994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:59.675 passed 00:07:59.675 Test: blockdev write read 8 blocks ...[2024-10-01 15:07:57.985394] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.675 passed 00:07:59.675 Test: blockdev write read size > 128k ...passed 00:07:59.675 Test: blockdev write read invalid size ...passed 00:07:59.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.675 Test: blockdev write read max offset ...passed 00:07:59.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.675 Test: blockdev writev readv 8 blocks ...passed 00:07:59.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.675 Test: blockdev writev readv block ...passed 00:07:59.675 Test: blockdev writev readv size > 128k ...passed 00:07:59.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.675 Test: blockdev comparev and writev ...[2024-10-01 15:07:57.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8005000 len:0x1000 00:07:59.675 [2024-10-01 15:07:57.992258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev nvme passthru rw ...passed 00:07:59.675 Test: blockdev nvme passthru vendor specific ...passed 00:07:59.675 Test: blockdev nvme admin passthru ...[2024-10-01 15:07:57.993156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.675 [2024-10-01 15:07:57.993203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev copy ...passed 00:07:59.675 Suite: bdevio tests on: Nvme2n2 00:07:59.675 Test: blockdev write read block ...passed 00:07:59.675 Test: blockdev write zeroes read block ...passed 00:07:59.675 Test: blockdev write zeroes read no split ...passed 00:07:59.675 Test: blockdev write zeroes read split ...passed 00:07:59.675 Test: blockdev write zeroes read split partial ...passed 00:07:59.675 Test: blockdev reset ...[2024-10-01 15:07:58.022213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:59.675 [2024-10-01 15:07:58.024682] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.675 passed 00:07:59.675 Test: blockdev write read 8 blocks ...passed 00:07:59.675 Test: blockdev write read size > 128k ...passed 00:07:59.675 Test: blockdev write read invalid size ...passed 00:07:59.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.675 Test: blockdev write read max offset ...passed 00:07:59.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.675 Test: blockdev writev readv 8 blocks ...passed 00:07:59.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.675 Test: blockdev writev readv block ...passed 00:07:59.675 Test: blockdev writev readv size > 128k ...passed 00:07:59.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.675 Test: blockdev comparev and writev ...[2024-10-01 15:07:58.031489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8436000 len:0x1000 00:07:59.675 [2024-10-01 15:07:58.031535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev nvme passthru rw ...passed 00:07:59.675 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:07:58.032421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.675 [2024-10-01 15:07:58.032460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.675 passed 00:07:59.675 Test: blockdev nvme admin passthru ...passed 00:07:59.675 Test: blockdev copy ...passed 00:07:59.675 Suite: bdevio tests on: Nvme2n1 00:07:59.675 Test: blockdev write read block ...passed 00:07:59.675 Test: blockdev write zeroes read block ...passed 00:07:59.675 Test: blockdev write zeroes read no split ...passed 00:07:59.675 Test: blockdev write zeroes read split ...passed 00:07:59.675 Test: blockdev write zeroes read split partial ...passed 00:07:59.675 Test: blockdev reset ...[2024-10-01 15:07:58.063224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:59.675 passed 00:07:59.675 Test: blockdev write read 8 blocks ...[2024-10-01 15:07:58.065762] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.675 passed 00:07:59.675 Test: blockdev write read size > 128k ...passed 00:07:59.675 Test: blockdev write read invalid size ...passed 00:07:59.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.675 Test: blockdev write read max offset ...passed 00:07:59.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.675 Test: blockdev writev readv 8 blocks ...passed 00:07:59.675 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.675 Test: blockdev writev readv block ...passed 00:07:59.676 Test: blockdev writev readv size > 128k ...passed 00:07:59.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.676 Test: blockdev comparev and writev ...[2024-10-01 15:07:58.072586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8430000 len:0x1000 00:07:59.676 [2024-10-01 15:07:58.072641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.676 passed 00:07:59.676 Test: blockdev nvme passthru rw ...passed 00:07:59.676 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:07:58.073476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.676 [2024-10-01 15:07:58.073510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.676 passed 00:07:59.676 Test: blockdev nvme admin passthru ...passed 00:07:59.676 Test: blockdev copy ...passed 00:07:59.676 Suite: bdevio tests on: Nvme1n1 00:07:59.676 Test: blockdev write read block ...passed 00:07:59.676 Test: blockdev write zeroes read block ...passed 00:07:59.676 Test: blockdev write zeroes read no split ...passed 00:07:59.676 Test: blockdev write zeroes read split ...passed 00:07:59.676 Test: blockdev write zeroes read split partial ...passed 00:07:59.676 Test: blockdev reset ...[2024-10-01 15:07:58.102619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:59.676 [2024-10-01 15:07:58.104752] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.676 passed 00:07:59.676 Test: blockdev write read 8 blocks ...passed 00:07:59.676 Test: blockdev write read size > 128k ...passed 00:07:59.676 Test: blockdev write read invalid size ...passed 00:07:59.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.676 Test: blockdev write read max offset ...passed 00:07:59.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.676 Test: blockdev writev readv 8 blocks ...passed 00:07:59.676 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.676 Test: blockdev writev readv block ...passed 00:07:59.676 Test: blockdev writev readv size > 128k ...passed 00:07:59.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.676 Test: blockdev comparev and writev ...[2024-10-01 15:07:58.111526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d842c000 len:0x1000 00:07:59.676 [2024-10-01 15:07:58.111570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.676 passed 00:07:59.676 Test: blockdev nvme passthru rw ...passed 00:07:59.676 Test: blockdev nvme passthru vendor specific ...passed 00:07:59.676 Test: blockdev nvme admin passthru ...[2024-10-01 15:07:58.112452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.676 [2024-10-01 15:07:58.112487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.676 passed 00:07:59.676 Test: blockdev copy ...passed 00:07:59.676 Suite: bdevio tests on: Nvme0n1 00:07:59.676 Test: blockdev write read block ...passed 00:07:59.676 Test: blockdev write zeroes read block ...passed 00:07:59.676 Test: blockdev write zeroes read no split ...passed 00:07:59.676 Test: blockdev write zeroes read split ...passed 00:07:59.676 Test: blockdev write zeroes read split partial ...passed 00:07:59.676 Test: blockdev reset ...[2024-10-01 15:07:58.130912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:59.676 [2024-10-01 15:07:58.132958] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:59.676 passed 00:07:59.676 Test: blockdev write read 8 blocks ...passed 00:07:59.676 Test: blockdev write read size > 128k ...passed 00:07:59.676 Test: blockdev write read invalid size ...passed 00:07:59.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.676 Test: blockdev write read max offset ...passed 00:07:59.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.676 Test: blockdev writev readv 8 blocks ...passed 00:07:59.676 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.676 Test: blockdev writev readv block ...passed 00:07:59.676 Test: blockdev writev readv size > 128k ...passed 00:07:59.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.676 Test: blockdev comparev and writev ...passed 00:07:59.676 Test: blockdev nvme passthru rw ...[2024-10-01 15:07:58.138928] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:59.676 separate metadata which is not supported yet. 00:07:59.676 passed 00:07:59.676 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:07:58.139543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:59.676 [2024-10-01 15:07:58.139589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:59.676 passed 00:07:59.676 Test: blockdev nvme admin passthru ...passed 00:07:59.676 Test: blockdev copy ...passed 00:07:59.676 00:07:59.676 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.676 suites 6 6 n/a 0 0 00:07:59.676 tests 138 138 138 0 0 00:07:59.676 asserts 893 893 893 0 n/a 00:07:59.676 00:07:59.676 Elapsed time = 0.480 seconds 00:07:59.676 0 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73272 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73272 ']' 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73272 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73272 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73272' 00:07:59.676 killing process with pid 73272 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73272 00:07:59.676 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73272 00:07:59.936 15:07:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:59.936 00:07:59.936 real 0m1.541s 00:07:59.936 user 0m3.697s 00:07:59.936 sys 0m0.412s 00:07:59.936 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.936 15:07:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:59.936 ************************************ 00:07:59.936 END 
TEST bdev_bounds 00:07:59.936 ************************************ 00:08:00.196 15:07:58 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.196 15:07:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.196 15:07:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.196 15:07:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.196 ************************************ 00:08:00.196 START TEST bdev_nbd 00:08:00.196 ************************************ 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73321 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73321 /var/tmp/spdk-nbd.sock 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73321 ']' 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.196 
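Each nbd_start_disk below is followed by the same readiness probe before the next device is mapped. A bash sketch of that pattern as it appears in the trace (scratch path swapped for /tmp here); the retry delay is an assumption, since sleeps do not show up in this xtrace output:

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do        # retry bound matches the traced (( i <= 20 ))
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                            # assumed delay; not visible in the trace
        done
        # One direct-I/O read proves the device actually serves data, and
        # the copy size is checked so an empty read still fails the wait.
        dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }

In the run below every device copies exactly 4096 bytes, so each probe returns 0 and the loop moves on to the next bdev.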
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.196 15:07:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:00.196 [2024-10-01 15:07:58.629227] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:00.196 [2024-10-01 15:07:58.629372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.456 [2024-10-01 15:07:58.799707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.456 [2024-10-01 15:07:58.846474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.024 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.284 15:07:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.284 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.284 1+0 records in 00:08:01.284 1+0 records out 00:08:01.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639371 s, 6.4 MB/s 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.285 15:07:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.545 1+0 records in 00:08:01.545 1+0 records out 00:08:01.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717734 s, 5.7 MB/s 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:01.545 15:08:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.545 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.805 1+0 records in 00:08:01.805 1+0 records out 00:08:01.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540215 s, 7.6 MB/s 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.805 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.066 1+0 records in 00:08:02.066 1+0 records out 00:08:02.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521593 s, 7.9 MB/s 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.066 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:02.325 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:02.325 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.589 1+0 records in 00:08:02.589 1+0 records out 00:08:02.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623839 s, 6.6 MB/s 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.589 15:08:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:02.589 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:02.589 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:02.589 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.848 1+0 records in 00:08:02.848 1+0 records out 00:08:02.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802913 s, 5.1 MB/s 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd0", 00:08:02.848 "bdev_name": "Nvme0n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd1", 00:08:02.848 "bdev_name": "Nvme1n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd2", 00:08:02.848 "bdev_name": "Nvme2n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd3", 00:08:02.848 "bdev_name": "Nvme2n2" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd4", 00:08:02.848 "bdev_name": "Nvme2n3" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd5", 00:08:02.848 "bdev_name": "Nvme3n1" 00:08:02.848 } 00:08:02.848 ]' 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd0", 00:08:02.848 "bdev_name": "Nvme0n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd1", 00:08:02.848 "bdev_name": "Nvme1n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 
"nbd_device": "/dev/nbd2", 00:08:02.848 "bdev_name": "Nvme2n1" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd3", 00:08:02.848 "bdev_name": "Nvme2n2" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd4", 00:08:02.848 "bdev_name": "Nvme2n3" 00:08:02.848 }, 00:08:02.848 { 00:08:02.848 "nbd_device": "/dev/nbd5", 00:08:02.848 "bdev_name": "Nvme3n1" 00:08:02.848 } 00:08:02.848 ]' 00:08:02.848 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:03.108 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:03.108 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.109 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.369 15:08:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:03.629 15:08:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.629 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.888 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.148 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
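The loop traced above is the waitfornbd_exit helper: after each nbd_stop_disk RPC it polls /proc/partitions until the kernel drops the device, mirroring how waitfornbd earlier polled until each device appeared and then proved it readable with a single O_DIRECT dd. A minimal sketch of both helpers, reconstructed from this trace; the sleep interval and the timeout return codes are assumptions, since the log only shows attempts that succeed on the first pass:

    # Wait until /dev/$1 is published, then read one 4 KiB block with O_DIRECT.
    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1          # assumed back-off between polls
        done
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]       # non-empty read-back means the device is live
    }

    # Wait until the device disappears from /proc/partitions after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1          # assumed
        done
        return 1               # assumed timeout behavior
    }

The jq one-liner just before this stop loop ('.[] | .nbd_device') is what reduces the nbd_get_disks JSON to the plain device list handed to nbd_stop_disks.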
00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.408 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.668 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.668 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.668 15:08:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.668 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:04.928 /dev/nbd0 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.928 1+0 records in 00:08:04.928 1+0 records out 00:08:04.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608382 s, 6.7 MB/s 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.928 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:05.187 /dev/nbd1 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.187 1+0 records in 00:08:05.187 1+0 records out 
00:08:05.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691266 s, 5.9 MB/s 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.187 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:05.481 /dev/nbd10 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.481 1+0 records in 00:08:05.481 1+0 records out 00:08:05.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728223 s, 5.6 MB/s 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.481 15:08:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:05.773 /dev/nbd11 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:05.773 15:08:04 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.773 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.773 1+0 records in 00:08:05.773 1+0 records out 00:08:05.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789637 s, 5.2 MB/s 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:05.774 /dev/nbd12 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.774 1+0 records in 00:08:05.774 1+0 records out 00:08:05.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054931 s, 7.5 MB/s 00:08:05.774 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:06.032 /dev/nbd13 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.032 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.291 1+0 records in 00:08:06.291 1+0 records out 00:08:06.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078487 s, 5.2 MB/s 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.291 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.291 { 00:08:06.291 "nbd_device": "/dev/nbd0", 00:08:06.291 "bdev_name": "Nvme0n1" 00:08:06.291 }, 00:08:06.291 { 00:08:06.291 "nbd_device": "/dev/nbd1", 00:08:06.291 "bdev_name": "Nvme1n1" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd10", 00:08:06.292 "bdev_name": "Nvme2n1" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd11", 00:08:06.292 "bdev_name": "Nvme2n2" 00:08:06.292 }, 
00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd12", 00:08:06.292 "bdev_name": "Nvme2n3" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd13", 00:08:06.292 "bdev_name": "Nvme3n1" 00:08:06.292 } 00:08:06.292 ]' 00:08:06.292 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd0", 00:08:06.292 "bdev_name": "Nvme0n1" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd1", 00:08:06.292 "bdev_name": "Nvme1n1" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd10", 00:08:06.292 "bdev_name": "Nvme2n1" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd11", 00:08:06.292 "bdev_name": "Nvme2n2" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd12", 00:08:06.292 "bdev_name": "Nvme2n3" 00:08:06.292 }, 00:08:06.292 { 00:08:06.292 "nbd_device": "/dev/nbd13", 00:08:06.292 "bdev_name": "Nvme3n1" 00:08:06.292 } 00:08:06.292 ]' 00:08:06.292 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.551 /dev/nbd1 00:08:06.551 /dev/nbd10 00:08:06.551 /dev/nbd11 00:08:06.551 /dev/nbd12 00:08:06.551 /dev/nbd13' 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.551 /dev/nbd1 00:08:06.551 /dev/nbd10 00:08:06.551 /dev/nbd11 00:08:06.551 /dev/nbd12 00:08:06.551 /dev/nbd13' 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:06.551 256+0 records in 00:08:06.551 256+0 records out 00:08:06.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119045 s, 88.1 MB/s 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.551 15:08:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.551 256+0 records in 00:08:06.551 256+0 records out 00:08:06.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119651 s, 8.8 MB/s 00:08:06.551 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.551 15:08:05 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.810 256+0 records in 00:08:06.810 256+0 records out 00:08:06.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122653 s, 8.5 MB/s 00:08:06.810 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.810 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:06.810 256+0 records in 00:08:06.810 256+0 records out 00:08:06.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122823 s, 8.5 MB/s 00:08:06.810 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.810 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:07.069 256+0 records in 00:08:07.069 256+0 records out 00:08:07.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118969 s, 8.8 MB/s 00:08:07.069 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.070 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:07.070 256+0 records in 00:08:07.070 256+0 records out 00:08:07.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122206 s, 8.6 MB/s 00:08:07.070 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.070 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:07.329 256+0 records in 00:08:07.329 256+0 records out 00:08:07.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119401 s, 8.8 MB/s 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:07.329 15:08:05 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.329 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.589 15:08:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.848 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.107 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.366 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.625 15:08:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:08.884 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:08.885 
15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.885 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:09.171 malloc_lvol_verify 00:08:09.171 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:09.430 214c7eb0-232b-4133-bedf-04fc837d4ab0 00:08:09.430 15:08:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:09.690 1af09168-79a6-41f2-ab76-e4afef2b14b1 00:08:09.690 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:09.951 /dev/nbd0 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:09.951 15:08:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:09.951 mke2fs 1.47.0 (5-Feb-2023) 00:08:09.951 Discarding device blocks: 0/4096 done 00:08:09.951 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:09.951 00:08:09.951 Allocating group tables: 0/1 done 00:08:09.951 Writing inode tables: 0/1 done 00:08:09.951 Creating journal (1024 blocks): done 00:08:09.951 Writing superblocks and filesystem accounting information: 0/1 done 00:08:09.951 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.951 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.210 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73321 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73321 ']' 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73321 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73321 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.211 killing process with pid 73321 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73321' 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73321 00:08:10.211 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73321 00:08:10.470 15:08:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:10.470 00:08:10.470 real 0m10.470s 00:08:10.470 user 0m14.026s 00:08:10.470 sys 0m4.770s 00:08:10.470 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.470 15:08:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
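The tail of bdev_nbd above also proves the nbd path works for a logical volume before killing the target: it builds a malloc bdev, an lvstore, and an lvol over RPC, exports lvs/lvol as /dev/nbd0, waits for /sys/block/nbd0/size to go non-zero, and runs mkfs.ext4 on it. A condensed sketch of that sequence, assuming the rpc.py path and socket used throughout this run; the size check is paraphrased from the wait_for_nbd_set_capacity trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # 16 MiB malloc bdev with 512-byte blocks, an lvstore on top, a 4 MiB lvol in it.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs

    # Export the lvol over nbd and wait for the kernel to report a non-zero size.
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) > 0 ))

    # Formatting the device end to end is the actual verification step.
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

In the trace above the size check reads 8192 512-byte sectors, which is exactly the expected 4 MiB lvol.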
00:08:10.470 ************************************ 00:08:10.470 END TEST bdev_nbd 00:08:10.470 ************************************ 00:08:10.729 15:08:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:10.729 15:08:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:10.729 skipping fio tests on NVMe due to multi-ns failures. 00:08:10.729 15:08:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:10.729 15:08:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:10.729 15:08:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.729 15:08:09 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:10.729 15:08:09 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.729 15:08:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.729 ************************************ 00:08:10.729 START TEST bdev_verify 00:08:10.729 ************************************ 00:08:10.729 15:08:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.729 [2024-10-01 15:08:09.136828] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:10.729 [2024-10-01 15:08:09.136977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73700 ] 00:08:10.987 [2024-10-01 15:08:09.305503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.987 [2024-10-01 15:08:09.356873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.987 [2024-10-01 15:08:09.356974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.245 Running I/O for 5 seconds... 
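The verify stage now running drives the same six bdevs through the bdevperf example app instead of nbd. The flags come straight from the run_test line above: --json loads the bdev configuration, -q 128 keeps 128 I/Os outstanding, -o 4096 issues 4 KiB I/Os, -w verify selects a write-then-read-back-and-compare workload, -t 5 runs for five seconds, and -m 0x3 spreads the reactors across cores 0 and 1, which is why each namespace reports two jobs (Core Mask 0x1 and 0x2) in the table below; -C is passed through as well, as shown. Repeating the invocation for reference:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The big-I/O variant further down is the identical command with -o 65536, trading more data per I/O for fewer, slower operations.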
00:08:16.423 21888.00 IOPS, 85.50 MiB/s 20096.00 IOPS, 78.50 MiB/s 19520.00 IOPS, 76.25 MiB/s 18992.00 IOPS, 74.19 MiB/s 18777.60 IOPS, 73.35 MiB/s
00:08:16.423 Latency(us)
00:08:16.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:16.423 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0xbd0bd
00:08:16.423 Nvme0n1 : 5.05 1775.50 6.94 0.00 0.00 71880.89 17265.71 69062.84
00:08:16.423 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:16.423 Nvme0n1 : 5.08 1309.56 5.12 0.00 0.00 97575.02 16002.36 101488.68
00:08:16.423 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0xa0000
00:08:16.423 Nvme1n1 : 5.05 1774.99 6.93 0.00 0.00 71779.21 19160.73 61482.77
00:08:16.423 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0xa0000 length 0xa0000
00:08:16.423 Nvme1n1 : 5.09 1308.73 5.11 0.00 0.00 97518.00 17476.27 96435.30
00:08:16.423 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0x80000
00:08:16.423 Nvme2n1 : 5.07 1780.21 6.95 0.00 0.00 71334.22 7053.67 59377.20
00:08:16.423 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x80000 length 0x80000
00:08:16.423 Nvme2n1 : 5.09 1308.07 5.11 0.00 0.00 97289.25 18950.17 91803.04
00:08:16.423 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0x80000
00:08:16.423 Nvme2n2 : 5.07 1779.57 6.95 0.00 0.00 71239.13 7106.31 60640.54
00:08:16.423 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x80000 length 0x80000
00:08:16.423 Nvme2n2 : 5.09 1307.77 5.11 0.00 0.00 97166.16 19055.45 92645.27
00:08:16.423 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0x80000
00:08:16.423 Nvme2n3 : 5.08 1787.57 6.98 0.00 0.00 70952.71 9738.28 63167.23
00:08:16.423 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x80000 length 0x80000
00:08:16.423 Nvme2n3 : 5.09 1307.47 5.11 0.00 0.00 97030.72 18529.05 96856.42
00:08:16.423 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x0 length 0x20000
00:08:16.423 Nvme3n1 : 5.09 1787.02 6.98 0.00 0.00 70873.77 8685.49 63167.23
00:08:16.423 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.423 Verification LBA range: start 0x20000 length 0x20000
00:08:16.423 Nvme3n1 : 5.09 1307.15 5.11 0.00 0.00 96902.11 14107.35 101909.80
00:08:16.423 ===================================================================================================================
00:08:16.423 Total : 18533.62 72.40 0.00 0.00 82337.87 7053.67 101909.80
00:08:16.991 
00:08:16.991 real 0m6.354s
00:08:16.991 user 0m11.787s
00:08:16.991 sys 0m0.306s
00:08:16.991 15:08:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:16.991 15:08:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:16.991 ************************************
00:08:16.991 END TEST bdev_verify 00:08:16.991 ************************************ 00:08:16.991 15:08:15 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.991 15:08:15 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:16.991 15:08:15 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.991 15:08:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.991 ************************************ 00:08:16.991 START TEST bdev_verify_big_io 00:08:16.991 ************************************ 00:08:16.991 15:08:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.249 [2024-10-01 15:08:15.559746] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:17.249 [2024-10-01 15:08:15.559879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73787 ] 00:08:17.249 [2024-10-01 15:08:15.718550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.249 [2024-10-01 15:08:15.770546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.249 [2024-10-01 15:08:15.770644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.814 Running I/O for 5 seconds... 00:08:23.447 1926.00 IOPS, 120.38 MiB/s 3244.50 IOPS, 202.78 MiB/s 3714.00 IOPS, 232.12 MiB/s 00:08:23.447 Latency(us) 00:08:23.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.447 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x0 length 0xbd0b 00:08:23.447 Nvme0n1 : 5.45 211.42 13.21 0.00 0.00 579143.47 15370.69 774851.34 00:08:23.447 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:23.447 Nvme0n1 : 5.63 125.04 7.82 0.00 0.00 994815.21 28004.14 970248.64 00:08:23.447 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x0 length 0xa000 00:08:23.447 Nvme1n1 : 5.51 220.80 13.80 0.00 0.00 551339.61 52218.24 636725.67 00:08:23.447 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0xa000 length 0xa000 00:08:23.447 Nvme1n1 : 5.63 124.98 7.81 0.00 0.00 973508.39 27583.02 963510.80 00:08:23.447 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x0 length 0x8000 00:08:23.447 Nvme2n1 : 5.63 214.33 13.40 0.00 0.00 550654.64 53060.47 1131956.74 00:08:23.447 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x8000 length 0x8000 00:08:23.447 Nvme2n1 : 5.64 129.03 8.06 0.00 0.00 935316.51 5053.38 936559.45 00:08:23.447 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x0 length 0x8000 00:08:23.447 Nvme2n2 : 5.65 218.60 13.66 0.00 0.00 
528988.86 61061.65 1145432.42 00:08:23.447 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.447 Verification LBA range: start 0x8000 length 0x8000 00:08:23.447 Nvme2n2 : 5.65 119.44 7.46 0.00 0.00 985882.92 6237.76 2048302.68 00:08:23.447 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.448 Verification LBA range: start 0x0 length 0x8000 00:08:23.448 Nvme2n3 : 5.67 230.09 14.38 0.00 0.00 492935.37 10054.12 1158908.09 00:08:23.448 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.448 Verification LBA range: start 0x8000 length 0x8000 00:08:23.448 Nvme2n3 : 5.65 132.15 8.26 0.00 0.00 871226.59 6711.52 1098267.55 00:08:23.448 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.448 Verification LBA range: start 0x0 length 0x2000 00:08:23.448 Nvme3n1 : 5.71 256.04 16.00 0.00 0.00 434761.83 819.20 1172383.77 00:08:23.448 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.448 Verification LBA range: start 0x2000 length 0x2000 00:08:23.448 Nvme3n1 : 5.66 135.81 8.49 0.00 0.00 828752.19 6632.56 1105005.39 00:08:23.448 =================================================================================================================== 00:08:23.448 Total : 2117.73 132.36 0.00 0.00 668260.57 819.20 2048302.68 00:08:24.436 00:08:24.436 real 0m7.232s 00:08:24.436 user 0m13.514s 00:08:24.436 sys 0m0.331s 00:08:24.436 15:08:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.436 15:08:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:24.436 ************************************ 00:08:24.436 END TEST bdev_verify_big_io 00:08:24.436 ************************************ 00:08:24.436 15:08:22 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.436 15:08:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:24.436 15:08:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.436 15:08:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.436 ************************************ 00:08:24.436 START TEST bdev_write_zeroes 00:08:24.436 ************************************ 00:08:24.436 15:08:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.436 [2024-10-01 15:08:22.863229] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:24.436 [2024-10-01 15:08:22.863419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73885 ] 00:08:24.695 [2024-10-01 15:08:23.034815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.695 [2024-10-01 15:08:23.088643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.262 Running I/O for 1 seconds... 
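The asterisk banners and the real/user/sys triples bracketing each stage come from the run_test helper in common/autotest_common.sh. Its exact body is not shown in this log, but judging by the output it behaves roughly like this illustrative reimplementation (not the actual helper):

# Illustrative sketch only -- inferred from the START/END banners and
# timing output above, not copied from autotest_common.sh.
run_test() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"              # prints the real/user/sys triple seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

# Each stage in this section is wrapped that way, e.g.:
run_test bdev_write_zeroes "$SPDK_REPO/build/examples/bdevperf" \
    --json "$SPDK_REPO/test/bdev/bdev.json" -q 128 -o 4096 -w write_zeroes -t 1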
00:08:26.195 71808.00 IOPS, 280.50 MiB/s 00:08:26.195 Latency(us) 00:08:26.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.195 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme0n1 : 1.02 11922.40 46.57 0.00 0.00 10706.31 7685.35 29056.93 00:08:26.195 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme1n1 : 1.02 11910.23 46.52 0.00 0.00 10705.36 8632.85 23161.32 00:08:26.195 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme2n1 : 1.02 11898.53 46.48 0.00 0.00 10669.78 8422.30 20213.51 00:08:26.195 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme2n2 : 1.02 11938.33 46.63 0.00 0.00 10591.15 6132.49 17476.27 00:08:26.195 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme2n3 : 1.02 11926.71 46.59 0.00 0.00 10583.77 6316.72 17476.27 00:08:26.195 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.195 Nvme3n1 : 1.03 11915.55 46.55 0.00 0.00 10577.19 6500.96 18529.05 00:08:26.195 =================================================================================================================== 00:08:26.195 Total : 71511.76 279.34 0.00 0.00 10638.78 6132.49 29056.93 00:08:26.454 00:08:26.454 real 0m2.036s 00:08:26.454 user 0m1.666s 00:08:26.454 sys 0m0.257s 00:08:26.454 15:08:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.454 15:08:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:26.454 ************************************ 00:08:26.454 END TEST bdev_write_zeroes 00:08:26.454 ************************************ 00:08:26.454 15:08:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.454 15:08:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:26.454 15:08:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.454 15:08:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.454 ************************************ 00:08:26.454 START TEST bdev_json_nonenclosed 00:08:26.454 ************************************ 00:08:26.454 15:08:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.454 [2024-10-01 15:08:24.975902] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:26.454 [2024-10-01 15:08:24.976051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73927 ] 00:08:26.713 [2024-10-01 15:08:25.148161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.713 [2024-10-01 15:08:25.201821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.713 [2024-10-01 15:08:25.201943] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
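The "not enclosed in {}" error above is the whole point of the bdev_json_nonenclosed stage: the app is fed a config whose top level is not a JSON object and is expected to bail out of json_config loading. The actual contents of test/bdev/nonenclosed.json are not shown in this log; a top-level array is one plausible trigger:

# Hypothetical stand-in for nonenclosed.json -- a top-level array
# instead of the required top-level object.
cat > /tmp/nonenclosed.json <<'EOF'
[
  { "subsystems": [] }
]
EOF
# Feeding it to bdevperf should fail during config load with the same
# "Invalid JSON configuration: not enclosed in {}." error.
"$SPDK_REPO/build/examples/bdevperf" --json /tmp/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1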
00:08:26.713 [2024-10-01 15:08:25.201976] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.713 [2024-10-01 15:08:25.201997] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.972 00:08:26.972 real 0m0.451s 00:08:26.972 user 0m0.185s 00:08:26.972 sys 0m0.160s 00:08:26.973 15:08:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.973 15:08:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:26.973 ************************************ 00:08:26.973 END TEST bdev_json_nonenclosed 00:08:26.973 ************************************ 00:08:26.973 15:08:25 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.973 15:08:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:26.973 15:08:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.973 15:08:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.973 ************************************ 00:08:26.973 START TEST bdev_json_nonarray 00:08:26.973 ************************************ 00:08:26.973 15:08:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.973 [2024-10-01 15:08:25.503691] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:26.973 [2024-10-01 15:08:25.503841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73947 ] 00:08:27.232 [2024-10-01 15:08:25.674407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.232 [2024-10-01 15:08:25.728304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.232 [2024-10-01 15:08:25.728429] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
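bdev_json_nonarray is the companion negative test: here the top level is an object, but "subsystems" maps to something other than an array, tripping the "'subsystems' should be an array" check above. Again the real nonarray.json is not shown in the log; a hypothetical config with the same defect:

# Hypothetical stand-in for nonarray.json -- "subsystems" is an object,
# not the array json_config expects.
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF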
00:08:27.232 [2024-10-01 15:08:25.728463] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:27.232 [2024-10-01 15:08:25.728478] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.491 00:08:27.491 real 0m0.433s 00:08:27.491 user 0m0.189s 00:08:27.491 sys 0m0.141s 00:08:27.491 15:08:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.491 15:08:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:27.491 ************************************ 00:08:27.491 END TEST bdev_json_nonarray 00:08:27.491 ************************************ 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:27.491 15:08:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:27.491 00:08:27.491 real 0m32.284s 00:08:27.491 user 0m48.054s 00:08:27.491 sys 0m7.752s 00:08:27.491 15:08:25 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.491 15:08:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.491 ************************************ 00:08:27.491 END TEST blockdev_nvme 00:08:27.491 ************************************ 00:08:27.491 15:08:25 -- spdk/autotest.sh@209 -- # uname -s 00:08:27.491 15:08:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:27.491 15:08:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.491 15:08:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.491 15:08:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.491 15:08:25 -- common/autotest_common.sh@10 -- # set +x 00:08:27.491 ************************************ 00:08:27.491 START TEST blockdev_nvme_gpt 00:08:27.491 ************************************ 00:08:27.491 15:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.750 * Looking for test storage... 
00:08:27.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.750 15:08:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.750 --rc genhtml_branch_coverage=1 00:08:27.750 --rc genhtml_function_coverage=1 00:08:27.750 --rc genhtml_legend=1 00:08:27.750 --rc geninfo_all_blocks=1 00:08:27.750 --rc geninfo_unexecuted_blocks=1 00:08:27.750 00:08:27.750 ' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.750 --rc 
genhtml_branch_coverage=1 00:08:27.750 --rc genhtml_function_coverage=1 00:08:27.750 --rc genhtml_legend=1 00:08:27.750 --rc geninfo_all_blocks=1 00:08:27.750 --rc geninfo_unexecuted_blocks=1 00:08:27.750 00:08:27.750 ' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.750 --rc genhtml_branch_coverage=1 00:08:27.750 --rc genhtml_function_coverage=1 00:08:27.750 --rc genhtml_legend=1 00:08:27.750 --rc geninfo_all_blocks=1 00:08:27.750 --rc geninfo_unexecuted_blocks=1 00:08:27.750 00:08:27.750 ' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.750 --rc genhtml_branch_coverage=1 00:08:27.750 --rc genhtml_function_coverage=1 00:08:27.750 --rc genhtml_legend=1 00:08:27.750 --rc geninfo_all_blocks=1 00:08:27.750 --rc geninfo_unexecuted_blocks=1 00:08:27.750 00:08:27.750 ' 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:27.750 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74031 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:08:27.751 15:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74031 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 74031 ']' 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.751 15:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:28.009 [2024-10-01 15:08:26.372236] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:28.009 [2024-10-01 15:08:26.372407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74031 ] 00:08:28.010 [2024-10-01 15:08:26.536078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.268 [2024-10-01 15:08:26.589754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.835 15:08:27 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.835 15:08:27 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:08:28.835 15:08:27 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:28.835 15:08:27 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:28.835 15:08:27 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:29.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.661 Waiting for block devices as requested 00:08:29.661 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.919 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.919 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.919 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:35.235 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:35.235 15:08:33 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:35.235 BYT; 00:08:35.235 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:35.235 BYT; 00:08:35.235 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:35.235 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:35.235 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:35.235 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:35.235 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:35.235 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:35.235 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:35.236 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:35.236 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:35.236 15:08:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:35.236 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:35.236 15:08:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:36.614 The operation has completed successfully. 00:08:36.614 15:08:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:37.550 The operation has completed successfully. 00:08:37.550 15:08:35 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:38.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.683 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.683 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.683 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.941 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:38.941 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.941 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.941 [] 00:08:38.941 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.941 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:38.941 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.941 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.200 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.200 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:39.200 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.200 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:39.460 15:08:37 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:39.460 15:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:39.460 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:39.461 15:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "40fe16b4-6358-40a2-a9cc-bb3de428fefb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "40fe16b4-6358-40a2-a9cc-bb3de428fefb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5fb3efce-a7ef-434d-a310-cab6f6deeaa1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5fb3efce-a7ef-434d-a310-cab6f6deeaa1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cb5824f2-80d8-4bdf-892b-6bb895bb87b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb5824f2-80d8-4bdf-892b-6bb895bb87b1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dd2502d9-2d04-4345-aa65-41ec4e990d2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dd2502d9-2d04-4345-aa65-41ec4e990d2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "055de151-5240-477a-9359-a010b45f96e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "055de151-5240-477a-9359-a010b45f96e9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:39.720 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:39.720 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:39.720 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:39.720 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74031 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 74031 ']' 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 74031 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74031 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.720 killing process with pid 74031 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74031' 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 74031 00:08:39.720 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 74031 00:08:39.980 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:39.980 15:08:38 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:39.980 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:39.980 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.980 15:08:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.980 ************************************ 00:08:39.980 START TEST bdev_hello_world 00:08:39.980 ************************************ 00:08:39.980 15:08:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:40.239 
[2024-10-01 15:08:38.605869] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:40.239 [2024-10-01 15:08:38.606020] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74656 ] 00:08:40.239 [2024-10-01 15:08:38.778255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.530 [2024-10-01 15:08:38.831862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.790 [2024-10-01 15:08:39.229390] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:40.790 [2024-10-01 15:08:39.229470] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:40.790 [2024-10-01 15:08:39.229526] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:40.790 [2024-10-01 15:08:39.232079] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:40.790 [2024-10-01 15:08:39.232779] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:40.790 [2024-10-01 15:08:39.232817] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:40.790 [2024-10-01 15:08:39.233062] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:40.790 00:08:40.790 [2024-10-01 15:08:39.233103] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:41.049 00:08:41.049 real 0m0.958s 00:08:41.049 user 0m0.606s 00:08:41.049 sys 0m0.246s 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:41.049 ************************************ 00:08:41.049 END TEST bdev_hello_world 00:08:41.049 ************************************ 00:08:41.049 15:08:39 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:41.049 15:08:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.049 15:08:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.049 15:08:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.049 ************************************ 00:08:41.049 START TEST bdev_bounds 00:08:41.049 ************************************ 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:41.049 Process bdevio pid: 74687 00:08:41.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
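bdev_bounds drives the bdevio app rather than bdevperf. Per the trace that follows, bdevio is started with -w so it comes up idle, and the test suites only begin once tests.py issues the perform_tests RPC. A condensed sketch of that handshake, with paths from this run and the -w semantics inferred from the waitforlisten/perform_tests sequence below:

# bdevio stays idle until triggered over the RPC socket (the -w flag,
# as suggested by the trace; -s 0 is carried over from blockdev.sh).
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK_REPO/test/bdev/bdev.json" &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to accept connections (waitforlisten) ...
"$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests
# blockdev.sh subsequently stops the app; killprocess "$bdevio_pid"
# would be the rough equivalent here.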
00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74687 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74687' 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74687 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74687 ']' 00:08:41.049 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.050 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.050 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.050 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.050 15:08:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:41.308 [2024-10-01 15:08:39.622374] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:08:41.308 [2024-10-01 15:08:39.622752] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74687 ] 00:08:41.308 [2024-10-01 15:08:39.813673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.568 [2024-10-01 15:08:39.870310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.568 [2024-10-01 15:08:39.870333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.568 [2024-10-01 15:08:39.870473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.137 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.137 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:42.137 15:08:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:42.397 I/O targets: 00:08:42.397 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:42.397 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:42.397 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:42.397 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:42.397 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:42.397 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:42.397 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:42.397 00:08:42.397 00:08:42.397 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.397 http://cunit.sourceforge.net/ 00:08:42.397 00:08:42.397 00:08:42.397 Suite: bdevio tests on: Nvme3n1 00:08:42.397 Test: blockdev write read block ...passed 00:08:42.397 Test: blockdev write zeroes read block ...passed 00:08:42.397 Test: blockdev write zeroes read no split ...passed 00:08:42.397 Test: blockdev write zeroes read split ...passed 00:08:42.397 Test: blockdev write zeroes 
read split partial ...passed 00:08:42.397 Test: blockdev reset ...[2024-10-01 15:08:40.713482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:42.397 passed 00:08:42.397 Test: blockdev write read 8 blocks ...[2024-10-01 15:08:40.715496] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:42.397 passed 00:08:42.397 Test: blockdev write read size > 128k ...passed 00:08:42.397 Test: blockdev write read invalid size ...passed 00:08:42.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.397 Test: blockdev write read max offset ...passed 00:08:42.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.397 Test: blockdev writev readv 8 blocks ...passed 00:08:42.397 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.397 Test: blockdev writev readv block ...passed 00:08:42.397 Test: blockdev writev readv size > 128k ...passed 00:08:42.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.397 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.720609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf40e000 len:0x1000 00:08:42.397 [2024-10-01 15:08:40.720668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.397 passed 00:08:42.397 Test: blockdev nvme passthru rw ...passed 00:08:42.397 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.397 Test: blockdev nvme admin passthru ...[2024-10-01 15:08:40.721275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:42.397 [2024-10-01 15:08:40.721318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:42.397 passed 00:08:42.397 Test: blockdev copy ...passed 00:08:42.397 Suite: bdevio tests on: Nvme2n3 00:08:42.397 Test: blockdev write read block ...passed 00:08:42.397 Test: blockdev write zeroes read block ...passed 00:08:42.397 Test: blockdev write zeroes read no split ...passed 00:08:42.397 Test: blockdev write zeroes read split ...passed 00:08:42.397 Test: blockdev write zeroes read split partial ...passed 00:08:42.397 Test: blockdev reset ...[2024-10-01 15:08:40.738005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:42.397 [2024-10-01 15:08:40.740284] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:42.397 passed 00:08:42.397 Test: blockdev write read 8 blocks ...passed 00:08:42.397 Test: blockdev write read size > 128k ...passed 00:08:42.397 Test: blockdev write read invalid size ...passed 00:08:42.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.397 Test: blockdev write read max offset ...passed 00:08:42.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.397 Test: blockdev writev readv 8 blocks ...passed 00:08:42.397 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.397 Test: blockdev writev readv block ...passed 00:08:42.397 Test: blockdev writev readv size > 128k ...passed 00:08:42.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.397 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.745764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:42.397 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2bf40a000 len:0x1000 00:08:42.397 [2024-10-01 15:08:40.745953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.397 passed 00:08:42.397 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.397 Test: blockdev nvme admin passthru ...[2024-10-01 15:08:40.746549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:42.397 [2024-10-01 15:08:40.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:42.397 passed 00:08:42.397 Test: blockdev copy ...passed 00:08:42.397 Suite: bdevio tests on: Nvme2n2 00:08:42.397 Test: blockdev write read block ...passed 00:08:42.397 Test: blockdev write zeroes read block ...passed 00:08:42.397 Test: blockdev write zeroes read no split ...passed 00:08:42.397 Test: blockdev write zeroes read split ...passed 00:08:42.397 Test: blockdev write zeroes read split partial ...passed 00:08:42.397 Test: blockdev reset ...[2024-10-01 15:08:40.765123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:42.397 [2024-10-01 15:08:40.767476] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:42.397 passed 00:08:42.397 Test: blockdev write read 8 blocks ...passed 00:08:42.397 Test: blockdev write read size > 128k ...passed 00:08:42.397 Test: blockdev write read invalid size ...passed 00:08:42.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.397 Test: blockdev write read max offset ...passed 00:08:42.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.397 Test: blockdev writev readv 8 blocks ...passed 00:08:42.397 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.397 Test: blockdev writev readv block ...passed 00:08:42.397 Test: blockdev writev readv size > 128k ...passed 00:08:42.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.397 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.775351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3405000 len:0x1000 00:08:42.397 [2024-10-01 15:08:40.775407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.397 passed 00:08:42.397 Test: blockdev nvme passthru rw ...passed 00:08:42.397 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:08:40.776427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:42.398 [2024-10-01 15:08:40.776466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme admin passthru ...passed 00:08:42.398 Test: blockdev copy ...passed 00:08:42.398 Suite: bdevio tests on: Nvme2n1 00:08:42.398 Test: blockdev write read block ...passed 00:08:42.398 Test: blockdev write zeroes read block ...passed 00:08:42.398 Test: blockdev write zeroes read no split ...passed 00:08:42.398 Test: blockdev write zeroes read split ...passed 00:08:42.398 Test: blockdev write zeroes read split partial ...passed 00:08:42.398 Test: blockdev reset ...[2024-10-01 15:08:40.806766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:42.398 [2024-10-01 15:08:40.809053] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:42.398 passed 00:08:42.398 Test: blockdev write read 8 blocks ...passed 00:08:42.398 Test: blockdev write read size > 128k ...passed 00:08:42.398 Test: blockdev write read invalid size ...passed 00:08:42.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.398 Test: blockdev write read max offset ...passed 00:08:42.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.398 Test: blockdev writev readv 8 blocks ...passed 00:08:42.398 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.398 Test: blockdev writev readv block ...passed 00:08:42.398 Test: blockdev writev readv size > 128k ...passed 00:08:42.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.398 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.816038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf002000 len:0x1000 00:08:42.398 [2024-10-01 15:08:40.816096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme passthru rw ...passed 00:08:42.398 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:08:40.816901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:42.398 [2024-10-01 15:08:40.816940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme admin passthru ...passed 00:08:42.398 Test: blockdev copy ...passed 00:08:42.398 Suite: bdevio tests on: Nvme1n1p2 00:08:42.398 Test: blockdev write read block ...passed 00:08:42.398 Test: blockdev write zeroes read block ...passed 00:08:42.398 Test: blockdev write zeroes read no split ...passed 00:08:42.398 Test: blockdev write zeroes read split ...passed 00:08:42.398 Test: blockdev write zeroes read split partial ...passed 00:08:42.398 Test: blockdev reset ...[2024-10-01 15:08:40.848828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:42.398 [2024-10-01 15:08:40.850902] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:42.398 passed 00:08:42.398 Test: blockdev write read 8 blocks ...passed 00:08:42.398 Test: blockdev write read size > 128k ...passed 00:08:42.398 Test: blockdev write read invalid size ...passed 00:08:42.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.398 Test: blockdev write read max offset ...passed 00:08:42.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.398 Test: blockdev writev readv 8 blocks ...passed 00:08:42.398 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.398 Test: blockdev writev readv block ...passed 00:08:42.398 Test: blockdev writev readv size > 128k ...passed 00:08:42.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.398 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.858477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6e3b000 len:0x1000 00:08:42.398 [2024-10-01 15:08:40.858533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme passthru rw ...passed 00:08:42.398 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.398 Test: blockdev nvme admin passthru ...passed 00:08:42.398 Test: blockdev copy ...passed 00:08:42.398 Suite: bdevio tests on: Nvme1n1p1 00:08:42.398 Test: blockdev write read block ...passed 00:08:42.398 Test: blockdev write zeroes read block ...passed 00:08:42.398 Test: blockdev write zeroes read no split ...passed 00:08:42.398 Test: blockdev write zeroes read split ...passed 00:08:42.398 Test: blockdev write zeroes read split partial ...passed 00:08:42.398 Test: blockdev reset ...[2024-10-01 15:08:40.875530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:42.398 [2024-10-01 15:08:40.877492] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:42.398 passed 00:08:42.398 Test: blockdev write read 8 blocks ...passed 00:08:42.398 Test: blockdev write read size > 128k ...passed 00:08:42.398 Test: blockdev write read invalid size ...passed 00:08:42.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.398 Test: blockdev write read max offset ...passed 00:08:42.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.398 Test: blockdev writev readv 8 blocks ...passed 00:08:42.398 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.398 Test: blockdev writev readv block ...passed 00:08:42.398 Test: blockdev writev readv size > 128k ...passed 00:08:42.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.398 Test: blockdev comparev and writev ...[2024-10-01 15:08:40.883455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d6e37000 len:0x1000 00:08:42.398 [2024-10-01 15:08:40.883508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme passthru rw ...passed 00:08:42.398 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.398 Test: blockdev nvme admin passthru ...passed 00:08:42.398 Test: blockdev copy ...passed 00:08:42.398 Suite: bdevio tests on: Nvme0n1 00:08:42.398 Test: blockdev write read block ...passed 00:08:42.398 Test: blockdev write zeroes read block ...passed 00:08:42.398 Test: blockdev write zeroes read no split ...passed 00:08:42.398 Test: blockdev write zeroes read split ...passed 00:08:42.398 Test: blockdev write zeroes read split partial ...passed 00:08:42.398 Test: blockdev reset ...[2024-10-01 15:08:40.898605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:42.398 passed 00:08:42.398 Test: blockdev write read 8 blocks ...[2024-10-01 15:08:40.900476] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:42.398 passed 00:08:42.398 Test: blockdev write read size > 128k ...passed 00:08:42.398 Test: blockdev write read invalid size ...passed 00:08:42.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.398 Test: blockdev write read max offset ...passed 00:08:42.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.398 Test: blockdev writev readv 8 blocks ...passed 00:08:42.398 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.398 Test: blockdev writev readv block ...passed 00:08:42.398 Test: blockdev writev readv size > 128k ...passed 00:08:42.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.398 Test: blockdev comparev and writev ...passed 00:08:42.398 Test: blockdev nvme passthru rw ...[2024-10-01 15:08:40.904701] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:42.398 separate metadata which is not supported yet. 
00:08:42.398 passed 00:08:42.398 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:08:40.905102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:42.398 passed 00:08:42.398 Test: blockdev nvme admin passthru ...[2024-10-01 15:08:40.905148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:42.398 passed 00:08:42.398 Test: blockdev copy ...passed 00:08:42.398 00:08:42.398 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.398 suites 7 7 n/a 0 0 00:08:42.398 tests 161 161 161 0 0 00:08:42.398 asserts 1025 1025 1025 0 n/a 00:08:42.398 00:08:42.398 Elapsed time = 0.506 seconds 00:08:42.398 0 00:08:42.398 15:08:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74687 00:08:42.398 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74687 ']' 00:08:42.398 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74687 00:08:42.398 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:42.398 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74687 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74687' 00:08:42.658 killing process with pid 74687 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74687 00:08:42.658 15:08:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74687 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:42.917 00:08:42.917 real 0m1.671s 00:08:42.917 user 0m4.130s 00:08:42.917 sys 0m0.421s 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.917 ************************************ 00:08:42.917 END TEST bdev_bounds 00:08:42.917 ************************************ 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:42.917 15:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:42.917 15:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:42.917 15:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.917 15:08:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:42.917 ************************************ 00:08:42.917 START TEST bdev_nbd 00:08:42.917 ************************************ 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74736 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74736 /var/tmp/spdk-nbd.sock 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74736 ']' 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.917 15:08:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:42.917 [2024-10-01 15:08:41.400151] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:08:42.917 [2024-10-01 15:08:41.400557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.176 [2024-10-01 15:08:41.560539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.176 [2024-10-01 15:08:41.617264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.744 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.003 1+0 records in 00:08:44.003 1+0 records out 00:08:44.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760823 s, 5.4 MB/s 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.003 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.572 1+0 records in 00:08:44.572 1+0 records out 00:08:44.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063889 s, 6.4 MB/s 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.572 15:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:44.572 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:44.573 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:44.573 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:44.573 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:44.573 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:44.573 1+0 records in 00:08:44.573 1+0 records out 00:08:44.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801747 s, 5.1 MB/s 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:44.832 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:45.090 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:45.090 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:45.090 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:45.090 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.091 1+0 records in 00:08:45.091 1+0 records out 00:08:45.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726367 s, 5.6 MB/s 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.091 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.352 1+0 records in 00:08:45.352 1+0 records out 00:08:45.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066075 s, 6.2 MB/s 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.352 15:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
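Every nbd_start_disk in this stretch is followed by the same waitfornbd readiness check (one follows for Nvme2n3 below): poll /proc/partitions until the kernel publishes the device, then prove it is actually readable by pulling a single 4096-byte block with O_DIRECT and checking that dd copied a non-zero size. A condensed sketch reconstructed from the xtrace lines; the retry back-off is an assumption, the rest mirrors the trace:

waitfornbd() {
    local nbd_name=$1 i
    # phase 1 (trace: grep -q -w nbdX /proc/partitions ... break)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed; the trace only shows the grep and the break
    done
    # phase 2 (trace: dd ... iflag=direct, stat -c %s, rm -f, '[' size '!=' 0 ']')
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            [ "$size" != 0 ] && return 0
        fi
    done
    return 1
}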
00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.626 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.626 1+0 records in 00:08:45.626 1+0 records out 00:08:45.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554178 s, 7.4 MB/s 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.627 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.884 1+0 records in 00:08:45.884 1+0 records out 00:08:45.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612077 s, 6.7 MB/s 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:45.884 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd0", 00:08:46.143 "bdev_name": "Nvme0n1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd1", 00:08:46.143 "bdev_name": "Nvme1n1p1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd2", 00:08:46.143 "bdev_name": "Nvme1n1p2" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd3", 00:08:46.143 "bdev_name": "Nvme2n1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd4", 00:08:46.143 "bdev_name": "Nvme2n2" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd5", 00:08:46.143 "bdev_name": "Nvme2n3" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd6", 00:08:46.143 "bdev_name": "Nvme3n1" 00:08:46.143 } 00:08:46.143 ]' 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd0", 00:08:46.143 "bdev_name": "Nvme0n1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd1", 00:08:46.143 "bdev_name": "Nvme1n1p1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd2", 00:08:46.143 "bdev_name": "Nvme1n1p2" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd3", 00:08:46.143 "bdev_name": "Nvme2n1" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd4", 00:08:46.143 "bdev_name": "Nvme2n2" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd5", 00:08:46.143 "bdev_name": "Nvme2n3" 00:08:46.143 }, 00:08:46.143 { 00:08:46.143 "nbd_device": "/dev/nbd6", 00:08:46.143 "bdev_name": "Nvme3n1" 00:08:46.143 } 00:08:46.143 ]' 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.143 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.402 15:08:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.661 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:46.919 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:46.919 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:46.919 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:46.919 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.920 15:08:45 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.177 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.436 15:08:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.696 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.955 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:48.214 15:08:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.214 15:08:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:48.472 /dev/nbd0 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:48.472 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.731 1+0 records in 00:08:48.731 1+0 records out 00:08:48.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504799 s, 8.1 MB/s 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:48.731 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.732 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.732 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:48.732 /dev/nbd1 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:48.991 15:08:47 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.991 1+0 records in 00:08:48.991 1+0 records out 00:08:48.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697699 s, 5.9 MB/s 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:48.991 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:49.252 /dev/nbd10 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.252 1+0 records in 00:08:49.252 1+0 records out 00:08:49.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869413 s, 4.7 MB/s 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.252 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:49.512 /dev/nbd11 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.512 1+0 records in 00:08:49.512 1+0 records out 00:08:49.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567282 s, 7.2 MB/s 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.512 15:08:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:49.771 /dev/nbd12 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
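The polling traced here comes from the waitfornbd helper in autotest_common.sh (the @868-@889 lines). Reconstructed from the traced commands, it has roughly this shape; this is a sketch, and the retry delay and the failure path are assumptions, since the trace only shows the loop bounds, the grep, and the dd read-back:

waitfornbd() {
    local nbd_name=$1
    local i
    # Poll until the kernel lists the device in /proc/partitions.
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the interval is not visible in the trace
    done
    # Prove the device serves I/O: read one 4 KiB block with O_DIRECT
    # into a scratch file and confirm that something actually arrived.
    for (( i = 1; i <= 20; i++ )); do
        if dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0
        fi
    done
    return 1   # assumed failure path; never taken in this run
}

The trace resumes below with the same helper still iterating for nbd12.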
00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.771 1+0 records in 00:08:49.771 1+0 records out 00:08:49.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110498 s, 3.7 MB/s 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.771 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:50.029 /dev/nbd13 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.029 1+0 records in 00:08:50.029 1+0 records out 00:08:50.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745945 s, 5.5 MB/s 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.029 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:50.287 /dev/nbd14 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.287 1+0 records in 00:08:50.287 1+0 records out 00:08:50.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920554 s, 4.4 MB/s 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.287 15:08:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.546 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd0", 00:08:50.546 "bdev_name": "Nvme0n1" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd1", 00:08:50.546 "bdev_name": "Nvme1n1p1" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd10", 00:08:50.546 "bdev_name": "Nvme1n1p2" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd11", 00:08:50.546 "bdev_name": "Nvme2n1" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd12", 00:08:50.546 "bdev_name": "Nvme2n2" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd13", 00:08:50.546 "bdev_name": "Nvme2n3" 
00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd14", 00:08:50.546 "bdev_name": "Nvme3n1" 00:08:50.546 } 00:08:50.546 ]' 00:08:50.546 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd0", 00:08:50.546 "bdev_name": "Nvme0n1" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd1", 00:08:50.546 "bdev_name": "Nvme1n1p1" 00:08:50.546 }, 00:08:50.546 { 00:08:50.546 "nbd_device": "/dev/nbd10", 00:08:50.546 "bdev_name": "Nvme1n1p2" 00:08:50.546 }, 00:08:50.546 { 00:08:50.547 "nbd_device": "/dev/nbd11", 00:08:50.547 "bdev_name": "Nvme2n1" 00:08:50.547 }, 00:08:50.547 { 00:08:50.547 "nbd_device": "/dev/nbd12", 00:08:50.547 "bdev_name": "Nvme2n2" 00:08:50.547 }, 00:08:50.547 { 00:08:50.547 "nbd_device": "/dev/nbd13", 00:08:50.547 "bdev_name": "Nvme2n3" 00:08:50.547 }, 00:08:50.547 { 00:08:50.547 "nbd_device": "/dev/nbd14", 00:08:50.547 "bdev_name": "Nvme3n1" 00:08:50.547 } 00:08:50.547 ]' 00:08:50.547 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.547 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:50.547 /dev/nbd1 00:08:50.547 /dev/nbd10 00:08:50.547 /dev/nbd11 00:08:50.547 /dev/nbd12 00:08:50.547 /dev/nbd13 00:08:50.547 /dev/nbd14' 00:08:50.547 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:50.547 /dev/nbd1 00:08:50.547 /dev/nbd10 00:08:50.547 /dev/nbd11 00:08:50.547 /dev/nbd12 00:08:50.547 /dev/nbd13 00:08:50.547 /dev/nbd14' 00:08:50.547 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:50.806 256+0 records in 00:08:50.806 256+0 records out 00:08:50.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122964 s, 85.3 MB/s 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:50.806 256+0 records in 00:08:50.806 256+0 records out 00:08:50.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.13237 s, 7.9 MB/s 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.806 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.065 256+0 records in 00:08:51.065 256+0 records out 00:08:51.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139888 s, 7.5 MB/s 00:08:51.065 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.065 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:51.065 256+0 records in 00:08:51.065 256+0 records out 00:08:51.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138313 s, 7.6 MB/s 00:08:51.065 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.065 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:51.324 256+0 records in 00:08:51.324 256+0 records out 00:08:51.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140625 s, 7.5 MB/s 00:08:51.324 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.324 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:51.324 256+0 records in 00:08:51.324 256+0 records out 00:08:51.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139708 s, 7.5 MB/s 00:08:51.324 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.324 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:51.583 256+0 records in 00:08:51.583 256+0 records out 00:08:51.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136674 s, 7.7 MB/s 00:08:51.583 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.583 15:08:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:51.583 256+0 records in 00:08:51.583 256+0 records out 00:08:51.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136071 s, 7.7 MB/s 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:51.583 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.842 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.843 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:51.843 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.843 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.102 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.361 15:08:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.930 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.205 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.463 15:08:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.722 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.722 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.722 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:53.979 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:53.979 malloc_lvol_verify 00:08:54.236 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:54.492 d071722d-26cd-4796-8018-34271e8b4b65 00:08:54.492 15:08:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:54.492 a730ae06-f5c2-4f6d-9552-f274218d6cab 00:08:54.749 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:54.749 /dev/nbd0 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:55.007 mke2fs 1.47.0 (5-Feb-2023) 00:08:55.007 Discarding device blocks: 0/4096 done 00:08:55.007 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:55.007 00:08:55.007 Allocating group tables: 0/1 done 00:08:55.007 Writing inode tables: 0/1 done 00:08:55.007 Creating journal (1024 blocks): done 00:08:55.007 Writing superblocks and filesystem accounting information: 0/1 done 00:08:55.007 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:55.007 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74736 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74736 ']' 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74736 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74736 00:08:55.265 killing process with pid 74736 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74736' 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74736 00:08:55.265 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74736 00:08:55.523 15:08:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:55.523 00:08:55.523 real 0m12.601s 00:08:55.523 user 0m17.033s 00:08:55.523 sys 0m5.671s 00:08:55.523 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.523 15:08:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:55.523 ************************************ 00:08:55.523 END TEST bdev_nbd 00:08:55.523 ************************************ 00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:55.523 skipping fio tests on NVMe due to multi-ns failures. 00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:55.523 15:08:53 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:55.523 15:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:08:55.523 15:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.523 15:08:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:55.523 ************************************
00:08:55.523 START TEST bdev_verify
00:08:55.523 ************************************
00:08:55.523 15:08:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:55.808 [2024-10-01 15:08:54.050124] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:08:55.808 [2024-10-01 15:08:54.050309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ]
00:08:55.808 [2024-10-01 15:08:54.224921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:55.808 [2024-10-01 15:08:54.281714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.808 [2024-10-01 15:08:54.281840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.373 Running I/O for 5 seconds...
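The verify pass just launched comes down to a single bdevperf invocation; here it is with the flags unpacked. Every value is taken from the trace above, while the annotations are editorial:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# --json ...  bdev configuration to attach (the GPT/NVMe bdevs under test)
# -q 128      keep 128 I/Os in flight per job
# -o 4096     4 KiB I/O size
# -w verify   write, read back, and compare the payload
# -t 5        run for 5 seconds
# -m 0x3      core mask: reactors on cores 0 and 1, matching the NOTICEs above
# -C          judging by the output, this lets every reactor drive every bdev,
#             which is why each bdev appears twice below (core masks 0x1 and 0x2);
#             this reading is inferred from the results, not from documentation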
00:09:01.521 20736.00 IOPS, 81.00 MiB/s 20256.00 IOPS, 79.12 MiB/s 20010.67 IOPS, 78.17 MiB/s 20016.00 IOPS, 78.19 MiB/s 20339.20 IOPS, 79.45 MiB/s
00:09:01.521 Latency(us)
00:09:01.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.521 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0xbd0bd
00:09:01.521 Nvme0n1 : 5.06 1441.37 5.63 0.00 0.00 88597.80 16423.48 82538.51
00:09:01.521 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:01.521 Nvme0n1 : 5.04 1420.83 5.55 0.00 0.00 89822.73 20529.35 78748.48
00:09:01.521 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x4ff80
00:09:01.521 Nvme1n1p1 : 5.06 1440.96 5.63 0.00 0.00 88456.75 16318.20 78327.36
00:09:01.521 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x4ff80 length 0x4ff80
00:09:01.521 Nvme1n1p1 : 5.05 1420.19 5.55 0.00 0.00 89738.99 22740.20 72431.76
00:09:01.521 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x4ff7f
00:09:01.521 Nvme1n1p2 : 5.06 1440.53 5.63 0.00 0.00 88296.11 16423.48 75800.67
00:09:01.521 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:09:01.521 Nvme1n1p2 : 5.05 1419.72 5.55 0.00 0.00 89528.25 22634.92 71168.41
00:09:01.521 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x80000
00:09:01.521 Nvme2n1 : 5.07 1439.83 5.62 0.00 0.00 88137.04 15897.09 72431.76
00:09:01.521 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x80000 length 0x80000
00:09:01.521 Nvme2n1 : 5.07 1427.83 5.58 0.00 0.00 88910.75 5000.74 74537.33
00:09:01.521 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x80000
00:09:01.521 Nvme2n2 : 5.07 1439.19 5.62 0.00 0.00 88013.11 15054.86 72852.87
00:09:01.521 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x80000 length 0x80000
00:09:01.521 Nvme2n2 : 5.07 1427.13 5.57 0.00 0.00 88796.53 6474.64 74958.44
00:09:01.521 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x80000
00:09:01.521 Nvme2n3 : 5.07 1438.81 5.62 0.00 0.00 87877.55 14528.46 74537.33
00:09:01.521 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x80000 length 0x80000
00:09:01.521 Nvme2n3 : 5.08 1436.26 5.61 0.00 0.00 88184.78 9159.25 74537.33
00:09:01.521 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x0 length 0x20000
00:09:01.521 Nvme3n1 : 5.08 1449.18 5.66 0.00 0.00 87166.69 2553.01 76642.90
00:09:01.521 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:01.521 Verification LBA range: start 0x20000 length 0x20000
00:09:01.521 Nvme3n1 : 5.08 1435.94 5.61 0.00 0.00 88045.55 9211.89 73695.10
===================================================================================================================
00:09:01.521 Total : 20077.76 78.43 0.00 0.00 88535.28 2553.01 82538.51
00:09:01.798
00:09:01.798 real 0m6.363s
00:09:01.798 user 0m11.754s
00:09:01.798 sys 0m0.310s
00:09:01.798 15:09:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:01.798 15:09:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:01.798 ************************************
00:09:01.798 END TEST bdev_verify
00:09:01.798 ************************************
00:09:02.188 15:09:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:02.188 15:09:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:09:02.188 15:09:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:02.188 15:09:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:02.188 ************************************
00:09:02.188 START TEST bdev_verify_big_io
00:09:02.188 ************************************
00:09:02.188 15:09:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:02.188 [2024-10-01 15:09:00.492992] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:09:02.188 [2024-10-01 15:09:00.493201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75250 ]
00:09:02.188 [2024-10-01 15:09:00.663277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:02.188 [2024-10-01 15:09:00.722036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.188 [2024-10-01 15:09:00.722139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:09:02.754 Running I/O for 5 seconds...
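A quick arithmetic check on the figures above: bdevperf's MiB/s column is IOPS multiplied by the I/O size and divided by 2^20, so the 4 KiB verify run obeys MiB/s = IOPS / 256. The first progress sample checks out (20736 / 256 = 81.00), and so do the per-job rows (1441.37 / 256 = 5.63). The big-I/O run just started uses -o 65536, so its divisor is 16 instead. An illustrative one-liner:

awk 'BEGIN { printf "%.2f MiB/s\n", 20736 * 4096 / 1048576 }'   # prints 81.00 MiB/s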
00:09:08.830 144.00 IOPS, 9.00 MiB/s 1806.50 IOPS, 112.91 MiB/s 2610.33 IOPS, 163.15 MiB/s
00:09:08.830 Latency(us)
00:09:08.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:08.830 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0xbd0b
00:09:08.830 Nvme0n1 : 5.68 115.43 7.21 0.00 0.00 1057771.68 34110.30 1340829.71
00:09:08.830 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:08.830 Nvme0n1 : 5.69 130.55 8.16 0.00 0.00 940341.83 23477.15 1003937.82
00:09:08.830 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x4ff8
00:09:08.830 Nvme1n1p1 : 5.69 135.06 8.44 0.00 0.00 899140.27 90118.58 852336.48
00:09:08.830 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x4ff8 length 0x4ff8
00:09:08.830 Nvme1n1p1 : 5.69 134.86 8.43 0.00 0.00 899293.80 85065.20 845598.64
00:09:08.830 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x4ff7
00:09:08.830 Nvme1n1p2 : 5.69 135.01 8.44 0.00 0.00 876757.85 128018.92 815278.37
00:09:08.830 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x4ff7 length 0x4ff7
00:09:08.830 Nvme1n1p2 : 5.70 134.81 8.43 0.00 0.00 876296.98 87170.78 774851.34
00:09:08.830 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x8000
00:09:08.830 Nvme2n1 : 5.76 137.25 8.58 0.00 0.00 840498.68 72431.76 828754.04
00:09:08.830 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x8000 length 0x8000
00:09:08.830 Nvme2n1 : 5.78 136.82 8.55 0.00 0.00 838397.36 82117.40 774851.34
00:09:08.830 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x8000
00:09:08.830 Nvme2n2 : 5.79 143.75 8.98 0.00 0.00 789174.24 22003.25 852336.48
00:09:08.830 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x8000 length 0x8000
00:09:08.830 Nvme2n2 : 5.88 133.59 8.35 0.00 0.00 837723.52 68220.61 1603605.38
00:09:08.830 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x8000
00:09:08.830 Nvme2n3 : 5.86 148.77 9.30 0.00 0.00 740235.89 27161.91 875918.91
00:09:08.830 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x8000 length 0x8000
00:09:08.830 Nvme2n3 : 5.90 143.28 8.95 0.00 0.00 768374.33 18634.33 1630556.74
00:09:08.830 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x0 length 0x2000
00:09:08.830 Nvme3n1 : 5.91 169.34 10.58 0.00 0.00 639077.15 3184.68 896132.42
00:09:08.830 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:08.830 Verification LBA range: start 0x2000 length 0x2000
00:09:08.830 Nvme3n1 : 5.91 151.84 9.49 0.00 0.00 707450.20 1789.74 1482324.31
===================================================================================================================
00:09:08.830 Total : 1950.35 121.90 0.00 0.00 826884.31 1789.74 1630556.74
00:09:09.399
00:09:09.399 real 0m7.468s
00:09:09.399 user 0m13.930s
00:09:09.399 sys 0m0.348s
00:09:09.399 15:09:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:09.399 ************************************
00:09:09.399 END TEST bdev_verify_big_io
00:09:09.399 ************************************
00:09:09.399 15:09:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:09.399 15:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.399 15:09:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:09:09.399 15:09:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:09.399 15:09:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:09.399 ************************************
00:09:09.399 START TEST bdev_write_zeroes
00:09:09.399 ************************************
00:09:09.399 15:09:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.677 [2024-10-01 15:09:08.045165] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... [2024-10-01 15:09:08.045414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ]
00:09:09.966 [2024-10-01 15:09:08.219370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.966 [2024-10-01 15:09:08.277571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.224 Running I/O for 1 seconds...
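Every test in this section goes through the same run_test wrapper, which is what produces the START/END banners and the real/user/sys timing blocks. Its observable behavior is roughly the following; a sketch only, since the real helper in autotest_common.sh also toggles xtrace (the @1107/@1126 lines above) and propagates the exit status:

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys block when the test finishes
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}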
00:09:11.596 58688.00 IOPS, 229.25 MiB/s
00:09:11.596
00:09:11.596 Latency(us)
00:09:11.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.596 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme0n1 : 1.02 8372.88 32.71 0.00 0.00 15250.58 11528.02 29478.04
00:09:11.597 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme1n1p1 : 1.03 8364.03 32.67 0.00 0.00 15240.74 12054.41 29688.60
00:09:11.597 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme1n1p2 : 1.03 8355.29 32.64 0.00 0.00 15181.31 11896.49 25688.01
00:09:11.597 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme2n1 : 1.03 8347.35 32.61 0.00 0.00 15137.13 11949.13 24108.83
00:09:11.597 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme2n2 : 1.03 8339.39 32.58 0.00 0.00 15119.29 11580.66 23792.99
00:09:11.597 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme2n3 : 1.03 8331.55 32.55 0.00 0.00 15093.89 10843.71 24108.83
00:09:11.597 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:11.597 Nvme3n1 : 1.03 8323.70 32.51 0.00 0.00 15071.70 9475.08 25372.17
===================================================================================================================
Total : 58434.20 228.26 0.00 0.00 15156.38 9475.08 29688.60
00:09:11.597
00:09:11.597 real 0m2.107s
00:09:11.597 user 0m1.728s
00:09:11.597 sys 0m0.266s
00:09:11.597 15:09:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:11.597 15:09:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:11.597 ************************************
00:09:11.597 END TEST bdev_write_zeroes
00:09:11.597 ************************************
00:09:11.597 15:09:10 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:11.597 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:09:11.597 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:11.597 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:11.597 ************************************
00:09:11.597 START TEST bdev_json_nonenclosed
00:09:11.597 ************************************
00:09:11.597 15:09:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:11.856 [2024-10-01 15:09:10.208178] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:09:11.856 [2024-10-01 15:09:10.208443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75396 ] 00:09:11.856 [2024-10-01 15:09:10.379310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.115 [2024-10-01 15:09:10.434343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.115 [2024-10-01 15:09:10.434487] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:12.115 [2024-10-01 15:09:10.434527] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:12.115 [2024-10-01 15:09:10.434550] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.115 00:09:12.115 real 0m0.447s 00:09:12.115 user 0m0.207s 00:09:12.115 sys 0m0.135s 00:09:12.115 15:09:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.115 15:09:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:12.115 ************************************ 00:09:12.115 END TEST bdev_json_nonenclosed 00:09:12.115 ************************************ 00:09:12.115 15:09:10 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.115 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:12.116 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.116 15:09:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:12.116 ************************************ 00:09:12.116 START TEST bdev_json_nonarray 00:09:12.116 ************************************ 00:09:12.116 15:09:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.375 [2024-10-01 15:09:10.727072] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:12.375 [2024-10-01 15:09:10.727272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75416 ] 00:09:12.375 [2024-10-01 15:09:10.899158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.636 [2024-10-01 15:09:10.954255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.636 [2024-10-01 15:09:10.954393] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
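The two JSON negative tests probe the loader checks reported in the errors around this point: the configuration handed to --json must be a single JSON object, and its 'subsystems' member must be an array; nonenclosed.json and nonarray.json each violate one of these on purpose. For contrast, a minimal configuration that satisfies both checks would look roughly like this (a sketch of the expected shape; a real config, such as the bdev.json used earlier, carries bdev definitions inside the config list):

cat > valid.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF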
00:09:12.636 [2024-10-01 15:09:10.954424] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:12.636 [2024-10-01 15:09:10.954440] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.636 00:09:12.636 real 0m0.446s 00:09:12.636 user 0m0.199s 00:09:12.636 sys 0m0.141s 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:12.636 ************************************ 00:09:12.636 END TEST bdev_json_nonarray 00:09:12.636 ************************************ 00:09:12.636 15:09:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:12.636 15:09:11 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:12.636 15:09:11 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:12.636 15:09:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.636 15:09:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.636 15:09:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:12.636 ************************************ 00:09:12.636 START TEST bdev_gpt_uuid 00:09:12.636 ************************************ 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75447 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75447 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75447 ']' 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.636 15:09:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:12.895 [2024-10-01 15:09:11.271606] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
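Both JSON-config tests above feed bdevperf a deliberately malformed file and expect spdk_app_stop to exit non-zero: nonenclosed.json is not wrapped in a top-level object, and nonarray.json carries a 'subsystems' key that is not an array, matching the two json_config_prepare_ctx errors logged above. A minimal sketch of a config shape that would pass both checks (the empty bdev subsystem entry is illustrative, not taken from this run):

# Top-level object enclosed in {}, with "subsystems" as an array -- the two
# properties json_config_prepare_ctx rejected above. The bdev subsystem with
# an empty config list is a sketch, not a file from this test run.
cat > /tmp/valid_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF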
00:09:12.895 [2024-10-01 15:09:11.271781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75447 ] 00:09:13.155 [2024-10-01 15:09:11.445244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.155 [2024-10-01 15:09:11.499635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.757 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.757 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:09:13.757 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:13.757 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.757 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 Some configs were skipped because the RPC state that can call them passed over. 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:14.016 { 00:09:14.016 "name": "Nvme1n1p1", 00:09:14.016 "aliases": [ 00:09:14.016 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:14.016 ], 00:09:14.016 "product_name": "GPT Disk", 00:09:14.016 "block_size": 4096, 00:09:14.016 "num_blocks": 655104, 00:09:14.016 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:14.016 "assigned_rate_limits": { 00:09:14.016 "rw_ios_per_sec": 0, 00:09:14.016 "rw_mbytes_per_sec": 0, 00:09:14.016 "r_mbytes_per_sec": 0, 00:09:14.016 "w_mbytes_per_sec": 0 00:09:14.016 }, 00:09:14.016 "claimed": false, 00:09:14.016 "zoned": false, 00:09:14.016 "supported_io_types": { 00:09:14.016 "read": true, 00:09:14.016 "write": true, 00:09:14.016 "unmap": true, 00:09:14.016 "flush": true, 00:09:14.016 "reset": true, 00:09:14.016 "nvme_admin": false, 00:09:14.016 "nvme_io": false, 00:09:14.016 "nvme_io_md": false, 00:09:14.016 "write_zeroes": true, 00:09:14.016 "zcopy": false, 00:09:14.016 "get_zone_info": false, 00:09:14.016 "zone_management": false, 00:09:14.016 "zone_append": false, 00:09:14.016 "compare": true, 00:09:14.016 "compare_and_write": false, 00:09:14.016 "abort": true, 00:09:14.016 "seek_hole": false, 00:09:14.016 "seek_data": false, 00:09:14.016 "copy": true, 00:09:14.016 "nvme_iov_md": false 00:09:14.016 }, 00:09:14.016 "driver_specific": { 
00:09:14.016 "gpt": { 00:09:14.016 "base_bdev": "Nvme1n1", 00:09:14.016 "offset_blocks": 256, 00:09:14.016 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:14.016 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:14.016 "partition_name": "SPDK_TEST_first" 00:09:14.016 } 00:09:14.016 } 00:09:14.016 } 00:09:14.016 ]' 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:14.016 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:14.274 { 00:09:14.274 "name": "Nvme1n1p2", 00:09:14.274 "aliases": [ 00:09:14.274 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:14.274 ], 00:09:14.274 "product_name": "GPT Disk", 00:09:14.274 "block_size": 4096, 00:09:14.274 "num_blocks": 655103, 00:09:14.274 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:14.274 "assigned_rate_limits": { 00:09:14.274 "rw_ios_per_sec": 0, 00:09:14.274 "rw_mbytes_per_sec": 0, 00:09:14.274 "r_mbytes_per_sec": 0, 00:09:14.274 "w_mbytes_per_sec": 0 00:09:14.274 }, 00:09:14.274 "claimed": false, 00:09:14.274 "zoned": false, 00:09:14.274 "supported_io_types": { 00:09:14.274 "read": true, 00:09:14.274 "write": true, 00:09:14.274 "unmap": true, 00:09:14.274 "flush": true, 00:09:14.274 "reset": true, 00:09:14.274 "nvme_admin": false, 00:09:14.274 "nvme_io": false, 00:09:14.274 "nvme_io_md": false, 00:09:14.274 "write_zeroes": true, 00:09:14.274 "zcopy": false, 00:09:14.274 "get_zone_info": false, 00:09:14.274 "zone_management": false, 00:09:14.274 "zone_append": false, 00:09:14.274 "compare": true, 00:09:14.274 "compare_and_write": false, 00:09:14.274 "abort": true, 00:09:14.274 "seek_hole": false, 00:09:14.274 "seek_data": false, 00:09:14.274 "copy": true, 00:09:14.274 "nvme_iov_md": false 00:09:14.274 }, 00:09:14.274 "driver_specific": { 00:09:14.274 "gpt": { 00:09:14.274 "base_bdev": "Nvme1n1", 00:09:14.274 "offset_blocks": 655360, 00:09:14.274 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:14.274 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:14.274 "partition_name": "SPDK_TEST_second" 00:09:14.274 } 00:09:14.274 } 00:09:14.274 } 00:09:14.274 ]' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75447 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75447 ']' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75447 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75447 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.274 killing process with pid 75447 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75447' 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75447 00:09:14.274 15:09:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75447 00:09:14.842 00:09:14.842 real 0m2.085s 00:09:14.842 user 0m2.232s 00:09:14.842 sys 0m0.496s 00:09:14.842 15:09:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.842 15:09:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:14.842 ************************************ 00:09:14.842 END TEST bdev_gpt_uuid 00:09:14.842 ************************************ 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:14.842 15:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.667 Waiting for block devices as requested 00:09:15.924 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.924 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:15.924 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.182 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.452 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:21.452 15:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:21.452 15:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:21.452 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:21.452 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:21.452 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:21.452 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:21.452 15:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:21.452 00:09:21.452 real 0m53.943s 00:09:21.452 user 1m5.965s 00:09:21.452 sys 0m12.512s 00:09:21.452 15:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.452 15:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:21.452 ************************************ 00:09:21.452 END TEST blockdev_nvme_gpt 00:09:21.452 ************************************ 00:09:21.452 15:09:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:21.452 15:09:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.452 15:09:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.452 15:09:19 -- common/autotest_common.sh@10 -- # set +x 00:09:21.710 ************************************ 00:09:21.710 START TEST nvme 00:09:21.710 ************************************ 00:09:21.710 15:09:19 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:21.710 * Looking for test storage... 00:09:21.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.710 15:09:20 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:21.710 15:09:20 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:09:21.710 15:09:20 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:21.710 15:09:20 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.710 15:09:20 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.710 15:09:20 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.710 15:09:20 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.710 15:09:20 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.710 15:09:20 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.710 15:09:20 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:21.710 15:09:20 nvme -- scripts/common.sh@345 -- # : 1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.710 15:09:20 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.710 15:09:20 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@353 -- # local d=1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.710 15:09:20 nvme -- scripts/common.sh@355 -- # echo 1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.710 15:09:20 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@353 -- # local d=2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.710 15:09:20 nvme -- scripts/common.sh@355 -- # echo 2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.710 15:09:20 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.711 15:09:20 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.711 15:09:20 nvme -- scripts/common.sh@368 -- # return 0 00:09:21.711 15:09:20 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.711 15:09:20 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.711 --rc genhtml_branch_coverage=1 00:09:21.711 --rc genhtml_function_coverage=1 00:09:21.711 --rc genhtml_legend=1 00:09:21.711 --rc geninfo_all_blocks=1 00:09:21.711 --rc geninfo_unexecuted_blocks=1 00:09:21.711 00:09:21.711 ' 00:09:21.711 15:09:20 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.711 --rc genhtml_branch_coverage=1 00:09:21.711 --rc genhtml_function_coverage=1 00:09:21.711 --rc genhtml_legend=1 00:09:21.711 --rc geninfo_all_blocks=1 00:09:21.711 --rc geninfo_unexecuted_blocks=1 00:09:21.711 00:09:21.711 ' 00:09:21.711 15:09:20 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.711 --rc genhtml_branch_coverage=1 00:09:21.711 --rc genhtml_function_coverage=1 00:09:21.711 --rc genhtml_legend=1 00:09:21.711 --rc geninfo_all_blocks=1 00:09:21.711 --rc geninfo_unexecuted_blocks=1 00:09:21.711 00:09:21.711 ' 00:09:21.711 15:09:20 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:21.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.711 --rc genhtml_branch_coverage=1 00:09:21.711 --rc genhtml_function_coverage=1 00:09:21.711 --rc genhtml_legend=1 00:09:21.711 --rc geninfo_all_blocks=1 00:09:21.711 --rc geninfo_unexecuted_blocks=1 00:09:21.711 00:09:21.711 ' 00:09:21.711 15:09:20 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.218 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.218 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.478 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.478 15:09:21 nvme -- nvme/nvme.sh@79 -- # uname 00:09:23.478 15:09:21 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:23.478 15:09:21 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:23.478 15:09:21 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:23.478 15:09:21 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1071 -- # stubpid=76082 00:09:23.478 Waiting for stub to ready for secondary processes... 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76082 ]] 00:09:23.478 15:09:21 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:23.478 [2024-10-01 15:09:21.935116] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:09:23.478 [2024-10-01 15:09:21.935348] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:24.414 15:09:22 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:24.414 15:09:22 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76082 ]] 00:09:24.414 15:09:22 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:24.414 [2024-10-01 15:09:22.959265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.674 [2024-10-01 15:09:22.997962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.674 [2024-10-01 15:09:22.998011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.674 [2024-10-01 15:09:22.998141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.674 [2024-10-01 15:09:23.012883] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:24.674 [2024-10-01 15:09:23.012942] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:24.674 [2024-10-01 15:09:23.026571] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:24.674 [2024-10-01 15:09:23.027350] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:24.674 [2024-10-01 15:09:23.028201] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:24.674 [2024-10-01 15:09:23.028446] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:24.674 [2024-10-01 15:09:23.028533] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:24.674 [2024-10-01 15:09:23.029187] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:24.674 [2024-10-01 15:09:23.029406] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:24.674 [2024-10-01 15:09:23.029471] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:24.674 [2024-10-01 15:09:23.030461] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:24.674 [2024-10-01 15:09:23.030656] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:24.674 [2024-10-01 15:09:23.030735] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:24.674 [2024-10-01 15:09:23.031095] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:24.674 [2024-10-01 15:09:23.031208] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:25.612 done. 00:09:25.612 15:09:23 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:25.612 15:09:23 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:09:25.612 15:09:23 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:25.612 15:09:23 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:09:25.612 15:09:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.612 15:09:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.612 ************************************ 00:09:25.612 START TEST nvme_reset 00:09:25.612 ************************************ 00:09:25.612 15:09:23 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:25.871 Initializing NVMe Controllers 00:09:25.871 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:25.871 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:25.871 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:25.871 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:25.871 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:25.871 00:09:25.871 real 0m0.292s 00:09:25.871 user 0m0.094s 00:09:25.871 sys 0m0.152s 00:09:25.871 15:09:24 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.871 15:09:24 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:25.871 ************************************ 00:09:25.871 END TEST nvme_reset 00:09:25.871 ************************************ 00:09:25.871 15:09:24 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:25.871 15:09:24 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.871 15:09:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.871 15:09:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.871 ************************************ 00:09:25.871 START TEST nvme_identify 00:09:25.871 ************************************ 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:09:25.871 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:25.871 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:25.871 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:25.871 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.871 15:09:24 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:25.871 15:09:24 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.871 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:26.132 [2024-10-01 15:09:24.608625] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 76115 terminated unexpected 00:09:26.132 ===================================================== 00:09:26.132 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.132 ===================================================== 00:09:26.132 Controller Capabilities/Features 00:09:26.132 ================================ 00:09:26.132 Vendor ID: 1b36 00:09:26.132 Subsystem Vendor ID: 1af4 00:09:26.132 Serial Number: 12340 00:09:26.132 Model Number: QEMU NVMe Ctrl 00:09:26.132 Firmware Version: 8.0.0 00:09:26.132 Recommended Arb Burst: 6 00:09:26.132 IEEE OUI Identifier: 00 54 52 00:09:26.132 Multi-path I/O 00:09:26.132 May have multiple subsystem ports: No 00:09:26.132 May have multiple controllers: No 00:09:26.132 Associated with SR-IOV VF: No 00:09:26.132 Max Data Transfer Size: 524288 00:09:26.132 Max Number of Namespaces: 256 00:09:26.132 Max Number of I/O Queues: 64 00:09:26.132 NVMe Specification Version (VS): 1.4 00:09:26.132 NVMe Specification Version (Identify): 1.4 00:09:26.132 Maximum Queue Entries: 2048 00:09:26.132 Contiguous Queues Required: Yes 00:09:26.132 Arbitration Mechanisms Supported 00:09:26.132 Weighted Round Robin: Not Supported 00:09:26.132 Vendor Specific: Not Supported 00:09:26.132 Reset Timeout: 7500 ms 00:09:26.132 Doorbell Stride: 4 bytes 00:09:26.132 NVM Subsystem Reset: Not Supported 00:09:26.132 Command Sets Supported 00:09:26.132 NVM Command Set: Supported 00:09:26.132 Boot Partition: Not Supported 00:09:26.132 Memory Page Size Minimum: 4096 bytes 00:09:26.132 Memory Page Size Maximum: 65536 bytes 00:09:26.132 Persistent Memory Region: Not Supported 00:09:26.132 Optional Asynchronous Events Supported 00:09:26.132 Namespace Attribute Notices: Supported 00:09:26.132 Firmware Activation Notices: Not Supported 00:09:26.132 ANA Change Notices: Not Supported 00:09:26.132 PLE Aggregate Log Change Notices: Not Supported 00:09:26.132 LBA Status Info Alert Notices: Not Supported 00:09:26.132 EGE Aggregate Log Change Notices: Not Supported 00:09:26.132 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.132 Zone Descriptor Change Notices: Not Supported 00:09:26.132 Discovery Log Change Notices: Not Supported 00:09:26.132 Controller Attributes 00:09:26.132 128-bit Host Identifier: Not Supported 00:09:26.132 Non-Operational Permissive Mode: Not Supported 00:09:26.132 NVM Sets: Not Supported 00:09:26.132 Read Recovery Levels: Not Supported 00:09:26.132 Endurance Groups: Not Supported 00:09:26.132 Predictable Latency Mode: Not Supported 00:09:26.133 Traffic Based Keep ALive: Not Supported 00:09:26.133 Namespace Granularity: Not Supported 00:09:26.133 SQ Associations: Not Supported 00:09:26.133 UUID List: Not Supported 00:09:26.133 Multi-Domain Subsystem: Not Supported 00:09:26.133 Fixed Capacity Management: Not Supported 00:09:26.133 Variable Capacity Management: Not Supported 00:09:26.133 Delete Endurance Group: Not Supported 00:09:26.133 Delete NVM Set: Not Supported 00:09:26.133 Extended LBA Formats Supported: Supported 00:09:26.133 Flexible Data Placement Supported: Not Supported 00:09:26.133 00:09:26.133 Controller Memory Buffer Support 00:09:26.133 ================================ 00:09:26.133 Supported: No 00:09:26.133 
00:09:26.133 Persistent Memory Region Support 00:09:26.133 ================================ 00:09:26.133 Supported: No 00:09:26.133 00:09:26.133 Admin Command Set Attributes 00:09:26.133 ============================ 00:09:26.133 Security Send/Receive: Not Supported 00:09:26.133 Format NVM: Supported 00:09:26.133 Firmware Activate/Download: Not Supported 00:09:26.133 Namespace Management: Supported 00:09:26.133 Device Self-Test: Not Supported 00:09:26.133 Directives: Supported 00:09:26.133 NVMe-MI: Not Supported 00:09:26.133 Virtualization Management: Not Supported 00:09:26.133 Doorbell Buffer Config: Supported 00:09:26.133 Get LBA Status Capability: Not Supported 00:09:26.133 Command & Feature Lockdown Capability: Not Supported 00:09:26.133 Abort Command Limit: 4 00:09:26.133 Async Event Request Limit: 4 00:09:26.133 Number of Firmware Slots: N/A 00:09:26.133 Firmware Slot 1 Read-Only: N/A 00:09:26.133 Firmware Activation Without Reset: N/A 00:09:26.133 Multiple Update Detection Support: N/A 00:09:26.133 Firmware Update Granularity: No Information Provided 00:09:26.133 Per-Namespace SMART Log: Yes 00:09:26.133 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.133 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:26.133 Command Effects Log Page: Supported 00:09:26.133 Get Log Page Extended Data: Supported 00:09:26.133 Telemetry Log Pages: Not Supported 00:09:26.133 Persistent Event Log Pages: Not Supported 00:09:26.133 Supported Log Pages Log Page: May Support 00:09:26.133 Commands Supported & Effects Log Page: Not Supported 00:09:26.133 Feature Identifiers & Effects Log Page:May Support 00:09:26.133 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.133 Data Area 4 for Telemetry Log: Not Supported 00:09:26.133 Error Log Page Entries Supported: 1 00:09:26.133 Keep Alive: Not Supported 00:09:26.133 00:09:26.133 NVM Command Set Attributes 00:09:26.133 ========================== 00:09:26.133 Submission Queue Entry Size 00:09:26.133 Max: 64 00:09:26.133 Min: 64 00:09:26.133 Completion Queue Entry Size 00:09:26.133 Max: 16 00:09:26.133 Min: 16 00:09:26.133 Number of Namespaces: 256 00:09:26.133 Compare Command: Supported 00:09:26.133 Write Uncorrectable Command: Not Supported 00:09:26.133 Dataset Management Command: Supported 00:09:26.133 Write Zeroes Command: Supported 00:09:26.133 Set Features Save Field: Supported 00:09:26.133 Reservations: Not Supported 00:09:26.133 Timestamp: Supported 00:09:26.133 Copy: Supported 00:09:26.133 Volatile Write Cache: Present 00:09:26.133 Atomic Write Unit (Normal): 1 00:09:26.133 Atomic Write Unit (PFail): 1 00:09:26.133 Atomic Compare & Write Unit: 1 00:09:26.133 Fused Compare & Write: Not Supported 00:09:26.133 Scatter-Gather List 00:09:26.133 SGL Command Set: Supported 00:09:26.133 SGL Keyed: Not Supported 00:09:26.133 SGL Bit Bucket Descriptor: Not Supported 00:09:26.133 SGL Metadata Pointer: Not Supported 00:09:26.133 Oversized SGL: Not Supported 00:09:26.133 SGL Metadata Address: Not Supported 00:09:26.133 SGL Offset: Not Supported 00:09:26.133 Transport SGL Data Block: Not Supported 00:09:26.133 Replay Protected Memory Block: Not Supported 00:09:26.133 00:09:26.133 Firmware Slot Information 00:09:26.133 ========================= 00:09:26.133 Active slot: 1 00:09:26.133 Slot 1 Firmware Revision: 1.0 00:09:26.133 00:09:26.133 00:09:26.133 Commands Supported and Effects 00:09:26.133 ============================== 00:09:26.133 Admin Commands 00:09:26.133 -------------- 00:09:26.133 Delete I/O Submission Queue (00h): Supported 00:09:26.133 
Create I/O Submission Queue (01h): Supported 00:09:26.133 Get Log Page (02h): Supported 00:09:26.133 Delete I/O Completion Queue (04h): Supported 00:09:26.133 Create I/O Completion Queue (05h): Supported 00:09:26.133 Identify (06h): Supported 00:09:26.133 Abort (08h): Supported 00:09:26.133 Set Features (09h): Supported 00:09:26.133 Get Features (0Ah): Supported 00:09:26.133 Asynchronous Event Request (0Ch): Supported 00:09:26.133 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.133 Directive Send (19h): Supported 00:09:26.133 Directive Receive (1Ah): Supported 00:09:26.133 Virtualization Management (1Ch): Supported 00:09:26.133 Doorbell Buffer Config (7Ch): Supported 00:09:26.133 Format NVM (80h): Supported LBA-Change 00:09:26.133 I/O Commands 00:09:26.133 ------------ 00:09:26.133 Flush (00h): Supported LBA-Change 00:09:26.133 Write (01h): Supported LBA-Change 00:09:26.133 Read (02h): Supported 00:09:26.133 Compare (05h): Supported 00:09:26.133 Write Zeroes (08h): Supported LBA-Change 00:09:26.133 Dataset Management (09h): Supported LBA-Change 00:09:26.133 Unknown (0Ch): Supported 00:09:26.133 Unknown (12h): Supported 00:09:26.133 Copy (19h): Supported LBA-Change 00:09:26.133 Unknown (1Dh): Supported LBA-Change 00:09:26.133 00:09:26.133 Error Log 00:09:26.133 ========= 00:09:26.133 00:09:26.133 Arbitration 00:09:26.133 =========== 00:09:26.133 Arbitration Burst: no limit 00:09:26.133 00:09:26.133 Power Management 00:09:26.133 ================ 00:09:26.133 Number of Power States: 1 00:09:26.133 Current Power State: Power State #0 00:09:26.133 Power State #0: 00:09:26.133 Max Power: 25.00 W 00:09:26.133 Non-Operational State: Operational 00:09:26.133 Entry Latency: 16 microseconds 00:09:26.133 Exit Latency: 4 microseconds 00:09:26.133 Relative Read Throughput: 0 00:09:26.133 Relative Read Latency: 0 00:09:26.133 Relative Write Throughput: 0 00:09:26.133 Relative Write Latency: 0 00:09:26.133 Idle Power[2024-10-01 15:09:24.610112] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 76115 terminated unexpected 00:09:26.133 : Not Reported 00:09:26.133 Active Power: Not Reported 00:09:26.133 Non-Operational Permissive Mode: Not Supported 00:09:26.133 00:09:26.133 Health Information 00:09:26.133 ================== 00:09:26.133 Critical Warnings: 00:09:26.133 Available Spare Space: OK 00:09:26.133 Temperature: OK 00:09:26.133 Device Reliability: OK 00:09:26.133 Read Only: No 00:09:26.133 Volatile Memory Backup: OK 00:09:26.133 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.133 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.133 Available Spare: 0% 00:09:26.133 Available Spare Threshold: 0% 00:09:26.133 Life Percentage Used: 0% 00:09:26.133 Data Units Read: 738 00:09:26.133 Data Units Written: 666 00:09:26.133 Host Read Commands: 34347 00:09:26.133 Host Write Commands: 34133 00:09:26.133 Controller Busy Time: 0 minutes 00:09:26.133 Power Cycles: 0 00:09:26.133 Power On Hours: 0 hours 00:09:26.133 Unsafe Shutdowns: 0 00:09:26.133 Unrecoverable Media Errors: 0 00:09:26.133 Lifetime Error Log Entries: 0 00:09:26.133 Warning Temperature Time: 0 minutes 00:09:26.133 Critical Temperature Time: 0 minutes 00:09:26.133 00:09:26.133 Number of Queues 00:09:26.133 ================ 00:09:26.133 Number of I/O Submission Queues: 64 00:09:26.133 Number of I/O Completion Queues: 64 00:09:26.133 00:09:26.133 ZNS Specific Controller Data 00:09:26.133 ============================ 00:09:26.133 Zone Append Size Limit: 0 00:09:26.133 00:09:26.133 
00:09:26.133 Active Namespaces 00:09:26.133 ================= 00:09:26.133 Namespace ID:1 00:09:26.133 Error Recovery Timeout: Unlimited 00:09:26.133 Command Set Identifier: NVM (00h) 00:09:26.133 Deallocate: Supported 00:09:26.133 Deallocated/Unwritten Error: Supported 00:09:26.133 Deallocated Read Value: All 0x00 00:09:26.133 Deallocate in Write Zeroes: Not Supported 00:09:26.133 Deallocated Guard Field: 0xFFFF 00:09:26.133 Flush: Supported 00:09:26.133 Reservation: Not Supported 00:09:26.133 Metadata Transferred as: Separate Metadata Buffer 00:09:26.133 Namespace Sharing Capabilities: Private 00:09:26.133 Size (in LBAs): 1548666 (5GiB) 00:09:26.133 Capacity (in LBAs): 1548666 (5GiB) 00:09:26.133 Utilization (in LBAs): 1548666 (5GiB) 00:09:26.133 Thin Provisioning: Not Supported 00:09:26.133 Per-NS Atomic Units: No 00:09:26.133 Maximum Single Source Range Length: 128 00:09:26.133 Maximum Copy Length: 128 00:09:26.133 Maximum Source Range Count: 128 00:09:26.134 NGUID/EUI64 Never Reused: No 00:09:26.134 Namespace Write Protected: No 00:09:26.134 Number of LBA Formats: 8 00:09:26.134 Current LBA Format: LBA Format #07 00:09:26.134 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.134 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.134 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.134 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.134 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.134 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.134 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.134 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.134 00:09:26.134 NVM Specific Namespace Data 00:09:26.134 =========================== 00:09:26.134 Logical Block Storage Tag Mask: 0 00:09:26.134 Protection Information Capabilities: 00:09:26.134 16b Guard Protection Information Storage Tag Support: No 00:09:26.134 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.134 Storage Tag Check Read Support: No 00:09:26.134 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.134 ===================================================== 00:09:26.134 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.134 ===================================================== 00:09:26.134 Controller Capabilities/Features 00:09:26.134 ================================ 00:09:26.134 Vendor ID: 1b36 00:09:26.134 Subsystem Vendor ID: 1af4 00:09:26.134 Serial Number: 12341 00:09:26.134 Model Number: QEMU NVMe Ctrl 00:09:26.134 Firmware Version: 8.0.0 00:09:26.134 Recommended Arb Burst: 6 00:09:26.134 IEEE OUI Identifier: 00 54 52 00:09:26.134 Multi-path I/O 00:09:26.134 May have multiple subsystem ports: No 00:09:26.134 May have multiple controllers: No 00:09:26.134 
Associated with SR-IOV VF: No 00:09:26.134 Max Data Transfer Size: 524288 00:09:26.134 Max Number of Namespaces: 256 00:09:26.134 Max Number of I/O Queues: 64 00:09:26.134 NVMe Specification Version (VS): 1.4 00:09:26.134 NVMe Specification Version (Identify): 1.4 00:09:26.134 Maximum Queue Entries: 2048 00:09:26.134 Contiguous Queues Required: Yes 00:09:26.134 Arbitration Mechanisms Supported 00:09:26.134 Weighted Round Robin: Not Supported 00:09:26.134 Vendor Specific: Not Supported 00:09:26.134 Reset Timeout: 7500 ms 00:09:26.134 Doorbell Stride: 4 bytes 00:09:26.134 NVM Subsystem Reset: Not Supported 00:09:26.134 Command Sets Supported 00:09:26.134 NVM Command Set: Supported 00:09:26.134 Boot Partition: Not Supported 00:09:26.134 Memory Page Size Minimum: 4096 bytes 00:09:26.134 Memory Page Size Maximum: 65536 bytes 00:09:26.134 Persistent Memory Region: Not Supported 00:09:26.134 Optional Asynchronous Events Supported 00:09:26.134 Namespace Attribute Notices: Supported 00:09:26.134 Firmware Activation Notices: Not Supported 00:09:26.134 ANA Change Notices: Not Supported 00:09:26.134 PLE Aggregate Log Change Notices: Not Supported 00:09:26.134 LBA Status Info Alert Notices: Not Supported 00:09:26.134 EGE Aggregate Log Change Notices: Not Supported 00:09:26.134 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.134 Zone Descriptor Change Notices: Not Supported 00:09:26.134 Discovery Log Change Notices: Not Supported 00:09:26.134 Controller Attributes 00:09:26.134 128-bit Host Identifier: Not Supported 00:09:26.134 Non-Operational Permissive Mode: Not Supported 00:09:26.134 NVM Sets: Not Supported 00:09:26.134 Read Recovery Levels: Not Supported 00:09:26.134 Endurance Groups: Not Supported 00:09:26.134 Predictable Latency Mode: Not Supported 00:09:26.134 Traffic Based Keep ALive: Not Supported 00:09:26.134 Namespace Granularity: Not Supported 00:09:26.134 SQ Associations: Not Supported 00:09:26.134 UUID List: Not Supported 00:09:26.134 Multi-Domain Subsystem: Not Supported 00:09:26.134 Fixed Capacity Management: Not Supported 00:09:26.134 Variable Capacity Management: Not Supported 00:09:26.134 Delete Endurance Group: Not Supported 00:09:26.134 Delete NVM Set: Not Supported 00:09:26.134 Extended LBA Formats Supported: Supported 00:09:26.134 Flexible Data Placement Supported: Not Supported 00:09:26.134 00:09:26.134 Controller Memory Buffer Support 00:09:26.134 ================================ 00:09:26.134 Supported: No 00:09:26.134 00:09:26.134 Persistent Memory Region Support 00:09:26.134 ================================ 00:09:26.134 Supported: No 00:09:26.134 00:09:26.134 Admin Command Set Attributes 00:09:26.134 ============================ 00:09:26.134 Security Send/Receive: Not Supported 00:09:26.134 Format NVM: Supported 00:09:26.134 Firmware Activate/Download: Not Supported 00:09:26.134 Namespace Management: Supported 00:09:26.134 Device Self-Test: Not Supported 00:09:26.134 Directives: Supported 00:09:26.134 NVMe-MI: Not Supported 00:09:26.134 Virtualization Management: Not Supported 00:09:26.134 Doorbell Buffer Config: Supported 00:09:26.134 Get LBA Status Capability: Not Supported 00:09:26.134 Command & Feature Lockdown Capability: Not Supported 00:09:26.134 Abort Command Limit: 4 00:09:26.134 Async Event Request Limit: 4 00:09:26.134 Number of Firmware Slots: N/A 00:09:26.134 Firmware Slot 1 Read-Only: N/A 00:09:26.134 Firmware Activation Without Reset: N/A 00:09:26.134 Multiple Update Detection Support: N/A 00:09:26.134 Firmware Update Granularity: No Information 
Provided 00:09:26.134 Per-Namespace SMART Log: Yes 00:09:26.134 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.134 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:26.134 Command Effects Log Page: Supported 00:09:26.134 Get Log Page Extended Data: Supported 00:09:26.134 Telemetry Log Pages: Not Supported 00:09:26.134 Persistent Event Log Pages: Not Supported 00:09:26.134 Supported Log Pages Log Page: May Support 00:09:26.134 Commands Supported & Effects Log Page: Not Supported 00:09:26.134 Feature Identifiers & Effects Log Page:May Support 00:09:26.134 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.134 Data Area 4 for Telemetry Log: Not Supported 00:09:26.134 Error Log Page Entries Supported: 1 00:09:26.134 Keep Alive: Not Supported 00:09:26.134 00:09:26.134 NVM Command Set Attributes 00:09:26.134 ========================== 00:09:26.134 Submission Queue Entry Size 00:09:26.134 Max: 64 00:09:26.134 Min: 64 00:09:26.134 Completion Queue Entry Size 00:09:26.134 Max: 16 00:09:26.134 Min: 16 00:09:26.134 Number of Namespaces: 256 00:09:26.134 Compare Command: Supported 00:09:26.134 Write Uncorrectable Command: Not Supported 00:09:26.134 Dataset Management Command: Supported 00:09:26.134 Write Zeroes Command: Supported 00:09:26.134 Set Features Save Field: Supported 00:09:26.134 Reservations: Not Supported 00:09:26.134 Timestamp: Supported 00:09:26.134 Copy: Supported 00:09:26.134 Volatile Write Cache: Present 00:09:26.134 Atomic Write Unit (Normal): 1 00:09:26.134 Atomic Write Unit (PFail): 1 00:09:26.134 Atomic Compare & Write Unit: 1 00:09:26.134 Fused Compare & Write: Not Supported 00:09:26.134 Scatter-Gather List 00:09:26.134 SGL Command Set: Supported 00:09:26.134 SGL Keyed: Not Supported 00:09:26.134 SGL Bit Bucket Descriptor: Not Supported 00:09:26.134 SGL Metadata Pointer: Not Supported 00:09:26.134 Oversized SGL: Not Supported 00:09:26.134 SGL Metadata Address: Not Supported 00:09:26.134 SGL Offset: Not Supported 00:09:26.134 Transport SGL Data Block: Not Supported 00:09:26.134 Replay Protected Memory Block: Not Supported 00:09:26.134 00:09:26.134 Firmware Slot Information 00:09:26.134 ========================= 00:09:26.134 Active slot: 1 00:09:26.134 Slot 1 Firmware Revision: 1.0 00:09:26.134 00:09:26.134 00:09:26.134 Commands Supported and Effects 00:09:26.134 ============================== 00:09:26.134 Admin Commands 00:09:26.134 -------------- 00:09:26.134 Delete I/O Submission Queue (00h): Supported 00:09:26.134 Create I/O Submission Queue (01h): Supported 00:09:26.134 Get Log Page (02h): Supported 00:09:26.134 Delete I/O Completion Queue (04h): Supported 00:09:26.134 Create I/O Completion Queue (05h): Supported 00:09:26.134 Identify (06h): Supported 00:09:26.134 Abort (08h): Supported 00:09:26.134 Set Features (09h): Supported 00:09:26.134 Get Features (0Ah): Supported 00:09:26.134 Asynchronous Event Request (0Ch): Supported 00:09:26.134 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.134 Directive Send (19h): Supported 00:09:26.134 Directive Receive (1Ah): Supported 00:09:26.134 Virtualization Management (1Ch): Supported 00:09:26.134 Doorbell Buffer Config (7Ch): Supported 00:09:26.134 Format NVM (80h): Supported LBA-Change 00:09:26.134 I/O Commands 00:09:26.134 ------------ 00:09:26.134 Flush (00h): Supported LBA-Change 00:09:26.135 Write (01h): Supported LBA-Change 00:09:26.135 Read (02h): Supported 00:09:26.135 Compare (05h): Supported 00:09:26.135 Write Zeroes (08h): Supported LBA-Change 00:09:26.135 Dataset Management (09h): 
Supported LBA-Change 00:09:26.135 Unknown (0Ch): Supported 00:09:26.135 Unknown (12h): Supported 00:09:26.135 Copy (19h): Supported LBA-Change 00:09:26.135 Unknown (1Dh): Supported LBA-Change 00:09:26.135 00:09:26.135 Error Log 00:09:26.135 ========= 00:09:26.135 00:09:26.135 Arbitration 00:09:26.135 =========== 00:09:26.135 Arbitration Burst: no limit 00:09:26.135 00:09:26.135 Power Management 00:09:26.135 ================ 00:09:26.135 Number of Power States: 1 00:09:26.135 Current Power State: Power State #0 00:09:26.135 Power State #0: 00:09:26.135 Max Power: 25.00 W 00:09:26.135 Non-Operational State: Operational 00:09:26.135 Entry Latency: 16 microseconds 00:09:26.135 Exit Latency: 4 microseconds 00:09:26.135 Relative Read Throughput: 0 00:09:26.135 Relative Read Latency: 0 00:09:26.135 Relative Write Throughput: 0 00:09:26.135 Relative Write Latency: 0 00:09:26.135 Idle Power: Not Reported 00:09:26.135 Active Power: Not Reported 00:09:26.135 Non-Operational Permissive Mode: Not Supported 00:09:26.135 00:09:26.135 Health Information 00:09:26.135 ================== 00:09:26.135 Critical Warnings: 00:09:26.135 Available Spare Space: OK 00:09:26.135 Temperature: [2024-10-01 15:09:24.610926] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 76115 terminated unexpected 00:09:26.135 OK 00:09:26.135 Device Reliability: OK 00:09:26.135 Read Only: No 00:09:26.135 Volatile Memory Backup: OK 00:09:26.135 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.135 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.135 Available Spare: 0% 00:09:26.135 Available Spare Threshold: 0% 00:09:26.135 Life Percentage Used: 0% 00:09:26.135 Data Units Read: 1143 00:09:26.135 Data Units Written: 1008 00:09:26.135 Host Read Commands: 51635 00:09:26.135 Host Write Commands: 50367 00:09:26.135 Controller Busy Time: 0 minutes 00:09:26.135 Power Cycles: 0 00:09:26.135 Power On Hours: 0 hours 00:09:26.135 Unsafe Shutdowns: 0 00:09:26.135 Unrecoverable Media Errors: 0 00:09:26.135 Lifetime Error Log Entries: 0 00:09:26.135 Warning Temperature Time: 0 minutes 00:09:26.135 Critical Temperature Time: 0 minutes 00:09:26.135 00:09:26.135 Number of Queues 00:09:26.135 ================ 00:09:26.135 Number of I/O Submission Queues: 64 00:09:26.135 Number of I/O Completion Queues: 64 00:09:26.135 00:09:26.135 ZNS Specific Controller Data 00:09:26.135 ============================ 00:09:26.135 Zone Append Size Limit: 0 00:09:26.135 00:09:26.135 00:09:26.135 Active Namespaces 00:09:26.135 ================= 00:09:26.135 Namespace ID:1 00:09:26.135 Error Recovery Timeout: Unlimited 00:09:26.135 Command Set Identifier: NVM (00h) 00:09:26.135 Deallocate: Supported 00:09:26.135 Deallocated/Unwritten Error: Supported 00:09:26.135 Deallocated Read Value: All 0x00 00:09:26.135 Deallocate in Write Zeroes: Not Supported 00:09:26.135 Deallocated Guard Field: 0xFFFF 00:09:26.135 Flush: Supported 00:09:26.135 Reservation: Not Supported 00:09:26.135 Namespace Sharing Capabilities: Private 00:09:26.135 Size (in LBAs): 1310720 (5GiB) 00:09:26.135 Capacity (in LBAs): 1310720 (5GiB) 00:09:26.135 Utilization (in LBAs): 1310720 (5GiB) 00:09:26.135 Thin Provisioning: Not Supported 00:09:26.135 Per-NS Atomic Units: No 00:09:26.135 Maximum Single Source Range Length: 128 00:09:26.135 Maximum Copy Length: 128 00:09:26.135 Maximum Source Range Count: 128 00:09:26.135 NGUID/EUI64 Never Reused: No 00:09:26.135 Namespace Write Protected: No 00:09:26.135 Number of LBA Formats: 8 00:09:26.135 Current LBA Format: LBA 
Format #04 00:09:26.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.135 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.135 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.135 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.135 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.135 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.135 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.135 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.135 00:09:26.135 NVM Specific Namespace Data 00:09:26.135 =========================== 00:09:26.135 Logical Block Storage Tag Mask: 0 00:09:26.135 Protection Information Capabilities: 00:09:26.135 16b Guard Protection Information Storage Tag Support: No 00:09:26.135 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.135 Storage Tag Check Read Support: No 00:09:26.135 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.135 ===================================================== 00:09:26.135 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.135 ===================================================== 00:09:26.135 Controller Capabilities/Features 00:09:26.135 ================================ 00:09:26.135 Vendor ID: 1b36 00:09:26.135 Subsystem Vendor ID: 1af4 00:09:26.135 Serial Number: 12343 00:09:26.135 Model Number: QEMU NVMe Ctrl 00:09:26.135 Firmware Version: 8.0.0 00:09:26.135 Recommended Arb Burst: 6 00:09:26.135 IEEE OUI Identifier: 00 54 52 00:09:26.135 Multi-path I/O 00:09:26.135 May have multiple subsystem ports: No 00:09:26.135 May have multiple controllers: Yes 00:09:26.135 Associated with SR-IOV VF: No 00:09:26.135 Max Data Transfer Size: 524288 00:09:26.135 Max Number of Namespaces: 256 00:09:26.135 Max Number of I/O Queues: 64 00:09:26.135 NVMe Specification Version (VS): 1.4 00:09:26.135 NVMe Specification Version (Identify): 1.4 00:09:26.135 Maximum Queue Entries: 2048 00:09:26.135 Contiguous Queues Required: Yes 00:09:26.135 Arbitration Mechanisms Supported 00:09:26.135 Weighted Round Robin: Not Supported 00:09:26.135 Vendor Specific: Not Supported 00:09:26.135 Reset Timeout: 7500 ms 00:09:26.135 Doorbell Stride: 4 bytes 00:09:26.135 NVM Subsystem Reset: Not Supported 00:09:26.135 Command Sets Supported 00:09:26.135 NVM Command Set: Supported 00:09:26.135 Boot Partition: Not Supported 00:09:26.135 Memory Page Size Minimum: 4096 bytes 00:09:26.135 Memory Page Size Maximum: 65536 bytes 00:09:26.135 Persistent Memory Region: Not Supported 00:09:26.135 Optional Asynchronous Events Supported 00:09:26.135 Namespace Attribute Notices: Supported 00:09:26.135 Firmware Activation Notices: Not Supported 00:09:26.135 ANA Change Notices: Not Supported 00:09:26.135 PLE Aggregate Log Change 
Notices: Not Supported 00:09:26.135 LBA Status Info Alert Notices: Not Supported 00:09:26.135 EGE Aggregate Log Change Notices: Not Supported 00:09:26.135 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.135 Zone Descriptor Change Notices: Not Supported 00:09:26.135 Discovery Log Change Notices: Not Supported 00:09:26.135 Controller Attributes 00:09:26.135 128-bit Host Identifier: Not Supported 00:09:26.135 Non-Operational Permissive Mode: Not Supported 00:09:26.135 NVM Sets: Not Supported 00:09:26.135 Read Recovery Levels: Not Supported 00:09:26.135 Endurance Groups: Supported 00:09:26.135 Predictable Latency Mode: Not Supported 00:09:26.135 Traffic Based Keep ALive: Not Supported 00:09:26.135 Namespace Granularity: Not Supported 00:09:26.135 SQ Associations: Not Supported 00:09:26.135 UUID List: Not Supported 00:09:26.135 Multi-Domain Subsystem: Not Supported 00:09:26.135 Fixed Capacity Management: Not Supported 00:09:26.135 Variable Capacity Management: Not Supported 00:09:26.135 Delete Endurance Group: Not Supported 00:09:26.135 Delete NVM Set: Not Supported 00:09:26.135 Extended LBA Formats Supported: Supported 00:09:26.135 Flexible Data Placement Supported: Supported 00:09:26.135 00:09:26.135 Controller Memory Buffer Support 00:09:26.135 ================================ 00:09:26.135 Supported: No 00:09:26.135 00:09:26.135 Persistent Memory Region Support 00:09:26.135 ================================ 00:09:26.135 Supported: No 00:09:26.135 00:09:26.135 Admin Command Set Attributes 00:09:26.135 ============================ 00:09:26.135 Security Send/Receive: Not Supported 00:09:26.135 Format NVM: Supported 00:09:26.135 Firmware Activate/Download: Not Supported 00:09:26.135 Namespace Management: Supported 00:09:26.135 Device Self-Test: Not Supported 00:09:26.135 Directives: Supported 00:09:26.135 NVMe-MI: Not Supported 00:09:26.135 Virtualization Management: Not Supported 00:09:26.136 Doorbell Buffer Config: Supported 00:09:26.136 Get LBA Status Capability: Not Supported 00:09:26.136 Command & Feature Lockdown Capability: Not Supported 00:09:26.136 Abort Command Limit: 4 00:09:26.136 Async Event Request Limit: 4 00:09:26.136 Number of Firmware Slots: N/A 00:09:26.136 Firmware Slot 1 Read-Only: N/A 00:09:26.136 Firmware Activation Without Reset: N/A 00:09:26.136 Multiple Update Detection Support: N/A 00:09:26.136 Firmware Update Granularity: No Information Provided 00:09:26.136 Per-Namespace SMART Log: Yes 00:09:26.136 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.136 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:26.136 Command Effects Log Page: Supported 00:09:26.136 Get Log Page Extended Data: Supported 00:09:26.136 Telemetry Log Pages: Not Supported 00:09:26.136 Persistent Event Log Pages: Not Supported 00:09:26.136 Supported Log Pages Log Page: May Support 00:09:26.136 Commands Supported & Effects Log Page: Not Supported 00:09:26.136 Feature Identifiers & Effects Log Page:May Support 00:09:26.136 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.136 Data Area 4 for Telemetry Log: Not Supported 00:09:26.136 Error Log Page Entries Supported: 1 00:09:26.136 Keep Alive: Not Supported 00:09:26.136 00:09:26.136 NVM Command Set Attributes 00:09:26.136 ========================== 00:09:26.136 Submission Queue Entry Size 00:09:26.136 Max: 64 00:09:26.136 Min: 64 00:09:26.136 Completion Queue Entry Size 00:09:26.136 Max: 16 00:09:26.136 Min: 16 00:09:26.136 Number of Namespaces: 256 00:09:26.136 Compare Command: Supported 00:09:26.136 Write 
Uncorrectable Command: Not Supported 00:09:26.136 Dataset Management Command: Supported 00:09:26.136 Write Zeroes Command: Supported 00:09:26.136 Set Features Save Field: Supported 00:09:26.136 Reservations: Not Supported 00:09:26.136 Timestamp: Supported 00:09:26.136 Copy: Supported 00:09:26.136 Volatile Write Cache: Present 00:09:26.136 Atomic Write Unit (Normal): 1 00:09:26.136 Atomic Write Unit (PFail): 1 00:09:26.136 Atomic Compare & Write Unit: 1 00:09:26.136 Fused Compare & Write: Not Supported 00:09:26.136 Scatter-Gather List 00:09:26.136 SGL Command Set: Supported 00:09:26.136 SGL Keyed: Not Supported 00:09:26.136 SGL Bit Bucket Descriptor: Not Supported 00:09:26.136 SGL Metadata Pointer: Not Supported 00:09:26.136 Oversized SGL: Not Supported 00:09:26.136 SGL Metadata Address: Not Supported 00:09:26.136 SGL Offset: Not Supported 00:09:26.136 Transport SGL Data Block: Not Supported 00:09:26.136 Replay Protected Memory Block: Not Supported 00:09:26.136 00:09:26.136 Firmware Slot Information 00:09:26.136 ========================= 00:09:26.136 Active slot: 1 00:09:26.136 Slot 1 Firmware Revision: 1.0 00:09:26.136 00:09:26.136 00:09:26.136 Commands Supported and Effects 00:09:26.136 ============================== 00:09:26.136 Admin Commands 00:09:26.136 -------------- 00:09:26.136 Delete I/O Submission Queue (00h): Supported 00:09:26.136 Create I/O Submission Queue (01h): Supported 00:09:26.136 Get Log Page (02h): Supported 00:09:26.136 Delete I/O Completion Queue (04h): Supported 00:09:26.136 Create I/O Completion Queue (05h): Supported 00:09:26.136 Identify (06h): Supported 00:09:26.136 Abort (08h): Supported 00:09:26.136 Set Features (09h): Supported 00:09:26.136 Get Features (0Ah): Supported 00:09:26.136 Asynchronous Event Request (0Ch): Supported 00:09:26.136 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.136 Directive Send (19h): Supported 00:09:26.136 Directive Receive (1Ah): Supported 00:09:26.136 Virtualization Management (1Ch): Supported 00:09:26.136 Doorbell Buffer Config (7Ch): Supported 00:09:26.136 Format NVM (80h): Supported LBA-Change 00:09:26.136 I/O Commands 00:09:26.136 ------------ 00:09:26.136 Flush (00h): Supported LBA-Change 00:09:26.136 Write (01h): Supported LBA-Change 00:09:26.136 Read (02h): Supported 00:09:26.136 Compare (05h): Supported 00:09:26.136 Write Zeroes (08h): Supported LBA-Change 00:09:26.136 Dataset Management (09h): Supported LBA-Change 00:09:26.136 Unknown (0Ch): Supported 00:09:26.136 Unknown (12h): Supported 00:09:26.136 Copy (19h): Supported LBA-Change 00:09:26.136 Unknown (1Dh): Supported LBA-Change 00:09:26.136 00:09:26.136 Error Log 00:09:26.136 ========= 00:09:26.136 00:09:26.136 Arbitration 00:09:26.136 =========== 00:09:26.136 Arbitration Burst: no limit 00:09:26.136 00:09:26.136 Power Management 00:09:26.136 ================ 00:09:26.136 Number of Power States: 1 00:09:26.136 Current Power State: Power State #0 00:09:26.136 Power State #0: 00:09:26.136 Max Power: 25.00 W 00:09:26.136 Non-Operational State: Operational 00:09:26.136 Entry Latency: 16 microseconds 00:09:26.136 Exit Latency: 4 microseconds 00:09:26.136 Relative Read Throughput: 0 00:09:26.136 Relative Read Latency: 0 00:09:26.136 Relative Write Throughput: 0 00:09:26.136 Relative Write Latency: 0 00:09:26.136 Idle Power: Not Reported 00:09:26.136 Active Power: Not Reported 00:09:26.136 Non-Operational Permissive Mode: Not Supported 00:09:26.136 00:09:26.136 Health Information 00:09:26.136 ================== 00:09:26.136 Critical Warnings: 00:09:26.136 
Available Spare Space: OK 00:09:26.136 Temperature: OK 00:09:26.136 Device Reliability: OK 00:09:26.136 Read Only: No 00:09:26.136 Volatile Memory Backup: OK 00:09:26.136 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.136 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.136 Available Spare: 0% 00:09:26.136 Available Spare Threshold: 0% 00:09:26.136 Life Percentage Used: 0% 00:09:26.136 Data Units Read: 852 00:09:26.136 Data Units Written: 781 00:09:26.136 Host Read Commands: 35650 00:09:26.136 Host Write Commands: 35073 00:09:26.136 Controller Busy Time: 0 minutes 00:09:26.136 Power Cycles: 0 00:09:26.136 Power On Hours: 0 hours 00:09:26.136 Unsafe Shutdowns: 0 00:09:26.136 Unrecoverable Media Errors: 0 00:09:26.136 Lifetime Error Log Entries: 0 00:09:26.136 Warning Temperature Time: 0 minutes 00:09:26.136 Critical Temperature Time: 0 minutes 00:09:26.136 00:09:26.136 Number of Queues 00:09:26.136 ================ 00:09:26.136 Number of I/O Submission Queues: 64 00:09:26.136 Number of I/O Completion Queues: 64 00:09:26.136 00:09:26.136 ZNS Specific Controller Data 00:09:26.136 ============================ 00:09:26.136 Zone Append Size Limit: 0 00:09:26.136 00:09:26.136 00:09:26.136 Active Namespaces 00:09:26.136 ================= 00:09:26.136 Namespace ID:1 00:09:26.136 Error Recovery Timeout: Unlimited 00:09:26.136 Command Set Identifier: NVM (00h) 00:09:26.136 Deallocate: Supported 00:09:26.136 Deallocated/Unwritten Error: Supported 00:09:26.136 Deallocated Read Value: All 0x00 00:09:26.136 Deallocate in Write Zeroes: Not Supported 00:09:26.136 Deallocated Guard Field: 0xFFFF 00:09:26.136 Flush: Supported 00:09:26.136 Reservation: Not Supported 00:09:26.136 Namespace Sharing Capabilities: Multiple Controllers 00:09:26.136 Size (in LBAs): 262144 (1GiB) 00:09:26.136 Capacity (in LBAs): 262144 (1GiB) 00:09:26.136 Utilization (in LBAs): 262144 (1GiB) 00:09:26.136 Thin Provisioning: Not Supported 00:09:26.136 Per-NS Atomic Units: No 00:09:26.136 Maximum Single Source Range Length: 128 00:09:26.136 Maximum Copy Length: 128 00:09:26.136 Maximum Source Range Count: 128 00:09:26.136 NGUID/EUI64 Never Reused: No 00:09:26.136 Namespace Write Protected: No 00:09:26.136 Endurance group ID: 1 00:09:26.136 Number of LBA Formats: 8 00:09:26.136 Current LBA Format: LBA Format #04 00:09:26.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.136 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.136 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.136 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.136 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.136 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.136 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.136 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.136 00:09:26.136 Get Feature FDP: 00:09:26.136 ================ 00:09:26.136 Enabled: Yes 00:09:26.136 FDP configuration index: 0 00:09:26.136 00:09:26.136 FDP configurations log page 00:09:26.136 =========================== 00:09:26.136 Number of FDP configurations: 1 00:09:26.136 Version: 0 00:09:26.136 Size: 112 00:09:26.136 FDP Configuration Descriptor: 0 00:09:26.136 Descriptor Size: 96 00:09:26.136 Reclaim Group Identifier format: 2 00:09:26.136 FDP Volatile Write Cache: Not Present 00:09:26.136 FDP Configuration: Valid 00:09:26.136 Vendor Specific Size: 0 00:09:26.136 Number of Reclaim Groups: 2 00:09:26.136 Number of Reclaim Unit Handles: 8 00:09:26.136 Max Placement Identifiers: 128 00:09:26.136 Number of
Namespaces Supported: 256 00:09:26.136 Reclaim Unit Nominal Size: 6000000 bytes 00:09:26.136 Estimated Reclaim Unit Time Limit: Not Reported 00:09:26.136 RUH Desc #000: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #001: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #002: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #003: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #004: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #005: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #006: RUH Type: Initially Isolated 00:09:26.137 RUH Desc #007: RUH Type: Initially Isolated 00:09:26.137 00:09:26.137 FDP reclaim unit handle usage log page 00:09:26.137 ====================================== 00:09:26.137 Number of Reclaim Unit Handles: 8 00:09:26.137 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:26.137 RUH Usage Desc #001: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #002: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #003: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #004: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #005: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #006: RUH Attributes: Unused 00:09:26.137 RUH Usage Desc #007: RUH Attributes: Unused 00:09:26.137 00:09:26.137 FDP statistics log page 00:09:26.137 ======================= 00:09:26.137 Host bytes with metadata written: 500342784 00:09:26.137 [2024-10-01 15:09:24.612753] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 76115 terminated unexpected 00:09:26.137 Media bytes with metadata written: 500396032 00:09:26.137 Media bytes erased: 0 00:09:26.137 00:09:26.137 FDP events log page 00:09:26.137 =================== 00:09:26.137 Number of FDP events: 0 00:09:26.137 00:09:26.137 NVM Specific Namespace Data 00:09:26.137 =========================== 00:09:26.137 Logical Block Storage Tag Mask: 0 00:09:26.137 Protection Information Capabilities: 00:09:26.137 16b Guard Protection Information Storage Tag Support: No 00:09:26.137 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.137 Storage Tag Check Read Support: No 00:09:26.137 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.137 ===================================================== 00:09:26.137 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.137 ===================================================== 00:09:26.137 Controller Capabilities/Features 00:09:26.137 ================================ 00:09:26.137 Vendor ID: 1b36 00:09:26.137 Subsystem Vendor ID: 1af4 00:09:26.137 Serial Number: 12342 00:09:26.137 Model Number: QEMU NVMe Ctrl 00:09:26.137 Firmware Version: 8.0.0 00:09:26.137 Recommended Arb Burst: 6 00:09:26.137 IEEE OUI Identifier: 00 54 52 00:09:26.137 Multi-path I/O 00:09:26.137
May have multiple subsystem ports: No 00:09:26.137 May have multiple controllers: No 00:09:26.137 Associated with SR-IOV VF: No 00:09:26.137 Max Data Transfer Size: 524288 00:09:26.137 Max Number of Namespaces: 256 00:09:26.137 Max Number of I/O Queues: 64 00:09:26.137 NVMe Specification Version (VS): 1.4 00:09:26.137 NVMe Specification Version (Identify): 1.4 00:09:26.137 Maximum Queue Entries: 2048 00:09:26.137 Contiguous Queues Required: Yes 00:09:26.137 Arbitration Mechanisms Supported 00:09:26.137 Weighted Round Robin: Not Supported 00:09:26.137 Vendor Specific: Not Supported 00:09:26.137 Reset Timeout: 7500 ms 00:09:26.137 Doorbell Stride: 4 bytes 00:09:26.137 NVM Subsystem Reset: Not Supported 00:09:26.137 Command Sets Supported 00:09:26.137 NVM Command Set: Supported 00:09:26.137 Boot Partition: Not Supported 00:09:26.137 Memory Page Size Minimum: 4096 bytes 00:09:26.137 Memory Page Size Maximum: 65536 bytes 00:09:26.137 Persistent Memory Region: Not Supported 00:09:26.137 Optional Asynchronous Events Supported 00:09:26.137 Namespace Attribute Notices: Supported 00:09:26.137 Firmware Activation Notices: Not Supported 00:09:26.137 ANA Change Notices: Not Supported 00:09:26.137 PLE Aggregate Log Change Notices: Not Supported 00:09:26.137 LBA Status Info Alert Notices: Not Supported 00:09:26.137 EGE Aggregate Log Change Notices: Not Supported 00:09:26.137 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.137 Zone Descriptor Change Notices: Not Supported 00:09:26.137 Discovery Log Change Notices: Not Supported 00:09:26.137 Controller Attributes 00:09:26.137 128-bit Host Identifier: Not Supported 00:09:26.137 Non-Operational Permissive Mode: Not Supported 00:09:26.137 NVM Sets: Not Supported 00:09:26.137 Read Recovery Levels: Not Supported 00:09:26.137 Endurance Groups: Not Supported 00:09:26.137 Predictable Latency Mode: Not Supported 00:09:26.137 Traffic Based Keep ALive: Not Supported 00:09:26.137 Namespace Granularity: Not Supported 00:09:26.137 SQ Associations: Not Supported 00:09:26.137 UUID List: Not Supported 00:09:26.137 Multi-Domain Subsystem: Not Supported 00:09:26.137 Fixed Capacity Management: Not Supported 00:09:26.137 Variable Capacity Management: Not Supported 00:09:26.137 Delete Endurance Group: Not Supported 00:09:26.137 Delete NVM Set: Not Supported 00:09:26.137 Extended LBA Formats Supported: Supported 00:09:26.137 Flexible Data Placement Supported: Not Supported 00:09:26.137 00:09:26.137 Controller Memory Buffer Support 00:09:26.137 ================================ 00:09:26.137 Supported: No 00:09:26.137 00:09:26.137 Persistent Memory Region Support 00:09:26.137 ================================ 00:09:26.137 Supported: No 00:09:26.137 00:09:26.137 Admin Command Set Attributes 00:09:26.137 ============================ 00:09:26.137 Security Send/Receive: Not Supported 00:09:26.137 Format NVM: Supported 00:09:26.137 Firmware Activate/Download: Not Supported 00:09:26.137 Namespace Management: Supported 00:09:26.137 Device Self-Test: Not Supported 00:09:26.137 Directives: Supported 00:09:26.137 NVMe-MI: Not Supported 00:09:26.137 Virtualization Management: Not Supported 00:09:26.137 Doorbell Buffer Config: Supported 00:09:26.137 Get LBA Status Capability: Not Supported 00:09:26.137 Command & Feature Lockdown Capability: Not Supported 00:09:26.137 Abort Command Limit: 4 00:09:26.137 Async Event Request Limit: 4 00:09:26.137 Number of Firmware Slots: N/A 00:09:26.137 Firmware Slot 1 Read-Only: N/A 00:09:26.137 Firmware Activation Without Reset: N/A 00:09:26.137 
Multiple Update Detection Support: N/A 00:09:26.137 Firmware Update Granularity: No Information Provided 00:09:26.137 Per-Namespace SMART Log: Yes 00:09:26.137 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.137 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:26.137 Command Effects Log Page: Supported 00:09:26.137 Get Log Page Extended Data: Supported 00:09:26.137 Telemetry Log Pages: Not Supported 00:09:26.137 Persistent Event Log Pages: Not Supported 00:09:26.137 Supported Log Pages Log Page: May Support 00:09:26.137 Commands Supported & Effects Log Page: Not Supported 00:09:26.137 Feature Identifiers & Effects Log Page:May Support 00:09:26.137 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.137 Data Area 4 for Telemetry Log: Not Supported 00:09:26.137 Error Log Page Entries Supported: 1 00:09:26.137 Keep Alive: Not Supported 00:09:26.137 00:09:26.137 NVM Command Set Attributes 00:09:26.137 ========================== 00:09:26.137 Submission Queue Entry Size 00:09:26.137 Max: 64 00:09:26.137 Min: 64 00:09:26.137 Completion Queue Entry Size 00:09:26.137 Max: 16 00:09:26.137 Min: 16 00:09:26.137 Number of Namespaces: 256 00:09:26.137 Compare Command: Supported 00:09:26.137 Write Uncorrectable Command: Not Supported 00:09:26.137 Dataset Management Command: Supported 00:09:26.137 Write Zeroes Command: Supported 00:09:26.137 Set Features Save Field: Supported 00:09:26.138 Reservations: Not Supported 00:09:26.138 Timestamp: Supported 00:09:26.138 Copy: Supported 00:09:26.138 Volatile Write Cache: Present 00:09:26.138 Atomic Write Unit (Normal): 1 00:09:26.138 Atomic Write Unit (PFail): 1 00:09:26.138 Atomic Compare & Write Unit: 1 00:09:26.138 Fused Compare & Write: Not Supported 00:09:26.138 Scatter-Gather List 00:09:26.138 SGL Command Set: Supported 00:09:26.138 SGL Keyed: Not Supported 00:09:26.138 SGL Bit Bucket Descriptor: Not Supported 00:09:26.138 SGL Metadata Pointer: Not Supported 00:09:26.138 Oversized SGL: Not Supported 00:09:26.138 SGL Metadata Address: Not Supported 00:09:26.138 SGL Offset: Not Supported 00:09:26.138 Transport SGL Data Block: Not Supported 00:09:26.138 Replay Protected Memory Block: Not Supported 00:09:26.138 00:09:26.138 Firmware Slot Information 00:09:26.138 ========================= 00:09:26.138 Active slot: 1 00:09:26.138 Slot 1 Firmware Revision: 1.0 00:09:26.138 00:09:26.138 00:09:26.138 Commands Supported and Effects 00:09:26.138 ============================== 00:09:26.138 Admin Commands 00:09:26.138 -------------- 00:09:26.138 Delete I/O Submission Queue (00h): Supported 00:09:26.138 Create I/O Submission Queue (01h): Supported 00:09:26.138 Get Log Page (02h): Supported 00:09:26.138 Delete I/O Completion Queue (04h): Supported 00:09:26.138 Create I/O Completion Queue (05h): Supported 00:09:26.138 Identify (06h): Supported 00:09:26.138 Abort (08h): Supported 00:09:26.138 Set Features (09h): Supported 00:09:26.138 Get Features (0Ah): Supported 00:09:26.138 Asynchronous Event Request (0Ch): Supported 00:09:26.138 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.138 Directive Send (19h): Supported 00:09:26.138 Directive Receive (1Ah): Supported 00:09:26.138 Virtualization Management (1Ch): Supported 00:09:26.138 Doorbell Buffer Config (7Ch): Supported 00:09:26.138 Format NVM (80h): Supported LBA-Change 00:09:26.138 I/O Commands 00:09:26.138 ------------ 00:09:26.138 Flush (00h): Supported LBA-Change 00:09:26.138 Write (01h): Supported LBA-Change 00:09:26.138 Read (02h): Supported 00:09:26.138 Compare (05h): Supported 
00:09:26.138 Write Zeroes (08h): Supported LBA-Change 00:09:26.138 Dataset Management (09h): Supported LBA-Change 00:09:26.138 Unknown (0Ch): Supported 00:09:26.138 Unknown (12h): Supported 00:09:26.138 Copy (19h): Supported LBA-Change 00:09:26.138 Unknown (1Dh): Supported LBA-Change 00:09:26.138 00:09:26.138 Error Log 00:09:26.138 ========= 00:09:26.138 00:09:26.138 Arbitration 00:09:26.138 =========== 00:09:26.138 Arbitration Burst: no limit 00:09:26.138 00:09:26.138 Power Management 00:09:26.138 ================ 00:09:26.138 Number of Power States: 1 00:09:26.138 Current Power State: Power State #0 00:09:26.138 Power State #0: 00:09:26.138 Max Power: 25.00 W 00:09:26.138 Non-Operational State: Operational 00:09:26.138 Entry Latency: 16 microseconds 00:09:26.138 Exit Latency: 4 microseconds 00:09:26.138 Relative Read Throughput: 0 00:09:26.138 Relative Read Latency: 0 00:09:26.138 Relative Write Throughput: 0 00:09:26.138 Relative Write Latency: 0 00:09:26.138 Idle Power: Not Reported 00:09:26.138 Active Power: Not Reported 00:09:26.138 Non-Operational Permissive Mode: Not Supported 00:09:26.138 00:09:26.138 Health Information 00:09:26.138 ================== 00:09:26.138 Critical Warnings: 00:09:26.138 Available Spare Space: OK 00:09:26.138 Temperature: OK 00:09:26.138 Device Reliability: OK 00:09:26.138 Read Only: No 00:09:26.138 Volatile Memory Backup: OK 00:09:26.138 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.138 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.138 Available Spare: 0% 00:09:26.138 Available Spare Threshold: 0% 00:09:26.138 Life Percentage Used: 0% 00:09:26.138 Data Units Read: 2349 00:09:26.138 Data Units Written: 2136 00:09:26.138 Host Read Commands: 104932 00:09:26.138 Host Write Commands: 103201 00:09:26.138 Controller Busy Time: 0 minutes 00:09:26.138 Power Cycles: 0 00:09:26.138 Power On Hours: 0 hours 00:09:26.138 Unsafe Shutdowns: 0 00:09:26.138 Unrecoverable Media Errors: 0 00:09:26.138 Lifetime Error Log Entries: 0 00:09:26.138 Warning Temperature Time: 0 minutes 00:09:26.138 Critical Temperature Time: 0 minutes 00:09:26.138 00:09:26.138 Number of Queues 00:09:26.138 ================ 00:09:26.138 Number of I/O Submission Queues: 64 00:09:26.138 Number of I/O Completion Queues: 64 00:09:26.138 00:09:26.138 ZNS Specific Controller Data 00:09:26.138 ============================ 00:09:26.138 Zone Append Size Limit: 0 00:09:26.138 00:09:26.138 00:09:26.138 Active Namespaces 00:09:26.138 ================= 00:09:26.138 Namespace ID:1 00:09:26.138 Error Recovery Timeout: Unlimited 00:09:26.138 Command Set Identifier: NVM (00h) 00:09:26.138 Deallocate: Supported 00:09:26.138 Deallocated/Unwritten Error: Supported 00:09:26.138 Deallocated Read Value: All 0x00 00:09:26.138 Deallocate in Write Zeroes: Not Supported 00:09:26.138 Deallocated Guard Field: 0xFFFF 00:09:26.138 Flush: Supported 00:09:26.138 Reservation: Not Supported 00:09:26.138 Namespace Sharing Capabilities: Private 00:09:26.138 Size (in LBAs): 1048576 (4GiB) 00:09:26.138 Capacity (in LBAs): 1048576 (4GiB) 00:09:26.138 Utilization (in LBAs): 1048576 (4GiB) 00:09:26.138 Thin Provisioning: Not Supported 00:09:26.138 Per-NS Atomic Units: No 00:09:26.138 Maximum Single Source Range Length: 128 00:09:26.138 Maximum Copy Length: 128 00:09:26.138 Maximum Source Range Count: 128 00:09:26.138 NGUID/EUI64 Never Reused: No 00:09:26.138 Namespace Write Protected: No 00:09:26.138 Number of LBA Formats: 8 00:09:26.138 Current LBA Format: LBA Format #04 00:09:26.138 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:09:26.138 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.138 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.138 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.138 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.138 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.138 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.138 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.138 00:09:26.138 NVM Specific Namespace Data 00:09:26.138 =========================== 00:09:26.138 Logical Block Storage Tag Mask: 0 00:09:26.138 Protection Information Capabilities: 00:09:26.138 16b Guard Protection Information Storage Tag Support: No 00:09:26.138 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.138 Storage Tag Check Read Support: No 00:09:26.138 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.138 Namespace ID:2 00:09:26.138 Error Recovery Timeout: Unlimited 00:09:26.138 Command Set Identifier: NVM (00h) 00:09:26.138 Deallocate: Supported 00:09:26.138 Deallocated/Unwritten Error: Supported 00:09:26.138 Deallocated Read Value: All 0x00 00:09:26.138 Deallocate in Write Zeroes: Not Supported 00:09:26.138 Deallocated Guard Field: 0xFFFF 00:09:26.138 Flush: Supported 00:09:26.138 Reservation: Not Supported 00:09:26.138 Namespace Sharing Capabilities: Private 00:09:26.138 Size (in LBAs): 1048576 (4GiB) 00:09:26.138 Capacity (in LBAs): 1048576 (4GiB) 00:09:26.138 Utilization (in LBAs): 1048576 (4GiB) 00:09:26.138 Thin Provisioning: Not Supported 00:09:26.138 Per-NS Atomic Units: No 00:09:26.138 Maximum Single Source Range Length: 128 00:09:26.138 Maximum Copy Length: 128 00:09:26.138 Maximum Source Range Count: 128 00:09:26.138 NGUID/EUI64 Never Reused: No 00:09:26.138 Namespace Write Protected: No 00:09:26.138 Number of LBA Formats: 8 00:09:26.138 Current LBA Format: LBA Format #04 00:09:26.138 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.138 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.138 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.138 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.138 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.138 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.138 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.138 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.138 00:09:26.138 NVM Specific Namespace Data 00:09:26.138 =========================== 00:09:26.138 Logical Block Storage Tag Mask: 0 00:09:26.138 Protection Information Capabilities: 00:09:26.138 16b Guard Protection Information Storage Tag Support: No 00:09:26.138 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:26.138 Storage Tag Check Read Support: No 00:09:26.139 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Namespace ID:3 00:09:26.139 Error Recovery Timeout: Unlimited 00:09:26.139 Command Set Identifier: NVM (00h) 00:09:26.139 Deallocate: Supported 00:09:26.139 Deallocated/Unwritten Error: Supported 00:09:26.139 Deallocated Read Value: All 0x00 00:09:26.139 Deallocate in Write Zeroes: Not Supported 00:09:26.139 Deallocated Guard Field: 0xFFFF 00:09:26.139 Flush: Supported 00:09:26.139 Reservation: Not Supported 00:09:26.139 Namespace Sharing Capabilities: Private 00:09:26.139 Size (in LBAs): 1048576 (4GiB) 00:09:26.139 Capacity (in LBAs): 1048576 (4GiB) 00:09:26.139 Utilization (in LBAs): 1048576 (4GiB) 00:09:26.139 Thin Provisioning: Not Supported 00:09:26.139 Per-NS Atomic Units: No 00:09:26.139 Maximum Single Source Range Length: 128 00:09:26.139 Maximum Copy Length: 128 00:09:26.139 Maximum Source Range Count: 128 00:09:26.139 NGUID/EUI64 Never Reused: No 00:09:26.139 Namespace Write Protected: No 00:09:26.139 Number of LBA Formats: 8 00:09:26.139 Current LBA Format: LBA Format #04 00:09:26.139 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.139 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.139 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.139 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.139 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.139 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.139 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.139 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.139 00:09:26.139 NVM Specific Namespace Data 00:09:26.139 =========================== 00:09:26.139 Logical Block Storage Tag Mask: 0 00:09:26.139 Protection Information Capabilities: 00:09:26.139 16b Guard Protection Information Storage Tag Support: No 00:09:26.139 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.139 Storage Tag Check Read Support: No 00:09:26.139 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.139 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:26.139 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:26.398 ===================================================== 00:09:26.398 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.398 ===================================================== 00:09:26.398 Controller Capabilities/Features 00:09:26.398 ================================ 00:09:26.398 Vendor ID: 1b36 00:09:26.398 Subsystem Vendor ID: 1af4 00:09:26.398 Serial Number: 12340 00:09:26.398 Model Number: QEMU NVMe Ctrl 00:09:26.398 Firmware Version: 8.0.0 00:09:26.398 Recommended Arb Burst: 6 00:09:26.399 IEEE OUI Identifier: 00 54 52 00:09:26.399 Multi-path I/O 00:09:26.399 May have multiple subsystem ports: No 00:09:26.399 May have multiple controllers: No 00:09:26.399 Associated with SR-IOV VF: No 00:09:26.399 Max Data Transfer Size: 524288 00:09:26.399 Max Number of Namespaces: 256 00:09:26.399 Max Number of I/O Queues: 64 00:09:26.399 NVMe Specification Version (VS): 1.4 00:09:26.399 NVMe Specification Version (Identify): 1.4 00:09:26.399 Maximum Queue Entries: 2048 00:09:26.399 Contiguous Queues Required: Yes 00:09:26.399 Arbitration Mechanisms Supported 00:09:26.399 Weighted Round Robin: Not Supported 00:09:26.399 Vendor Specific: Not Supported 00:09:26.399 Reset Timeout: 7500 ms 00:09:26.399 Doorbell Stride: 4 bytes 00:09:26.399 NVM Subsystem Reset: Not Supported 00:09:26.399 Command Sets Supported 00:09:26.399 NVM Command Set: Supported 00:09:26.399 Boot Partition: Not Supported 00:09:26.399 Memory Page Size Minimum: 4096 bytes 00:09:26.399 Memory Page Size Maximum: 65536 bytes 00:09:26.399 Persistent Memory Region: Not Supported 00:09:26.399 Optional Asynchronous Events Supported 00:09:26.399 Namespace Attribute Notices: Supported 00:09:26.399 Firmware Activation Notices: Not Supported 00:09:26.399 ANA Change Notices: Not Supported 00:09:26.399 PLE Aggregate Log Change Notices: Not Supported 00:09:26.399 LBA Status Info Alert Notices: Not Supported 00:09:26.399 EGE Aggregate Log Change Notices: Not Supported 00:09:26.399 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.399 Zone Descriptor Change Notices: Not Supported 00:09:26.399 Discovery Log Change Notices: Not Supported 00:09:26.399 Controller Attributes 00:09:26.399 128-bit Host Identifier: Not Supported 00:09:26.399 Non-Operational Permissive Mode: Not Supported 00:09:26.399 NVM Sets: Not Supported 00:09:26.399 Read Recovery Levels: Not Supported 00:09:26.399 Endurance Groups: Not Supported 00:09:26.399 Predictable Latency Mode: Not Supported 00:09:26.399 Traffic Based Keep ALive: Not Supported 00:09:26.399 Namespace Granularity: Not Supported 00:09:26.399 SQ Associations: Not Supported 00:09:26.399 UUID List: Not Supported 00:09:26.399 Multi-Domain Subsystem: Not Supported 00:09:26.399 Fixed Capacity Management: Not Supported 00:09:26.399 Variable Capacity Management: Not Supported 00:09:26.399 Delete Endurance Group: Not Supported 00:09:26.399 Delete NVM Set: Not Supported 00:09:26.399 Extended LBA Formats Supported: Supported 00:09:26.399 Flexible Data Placement Supported: Not Supported 00:09:26.399 00:09:26.399 Controller Memory Buffer Support 00:09:26.399 ================================ 00:09:26.399 Supported: No 00:09:26.399 00:09:26.399 Persistent Memory Region Support 00:09:26.399 
================================ 00:09:26.399 Supported: No 00:09:26.399 00:09:26.399 Admin Command Set Attributes 00:09:26.399 ============================ 00:09:26.399 Security Send/Receive: Not Supported 00:09:26.399 Format NVM: Supported 00:09:26.399 Firmware Activate/Download: Not Supported 00:09:26.399 Namespace Management: Supported 00:09:26.399 Device Self-Test: Not Supported 00:09:26.399 Directives: Supported 00:09:26.399 NVMe-MI: Not Supported 00:09:26.399 Virtualization Management: Not Supported 00:09:26.399 Doorbell Buffer Config: Supported 00:09:26.399 Get LBA Status Capability: Not Supported 00:09:26.399 Command & Feature Lockdown Capability: Not Supported 00:09:26.399 Abort Command Limit: 4 00:09:26.399 Async Event Request Limit: 4 00:09:26.399 Number of Firmware Slots: N/A 00:09:26.399 Firmware Slot 1 Read-Only: N/A 00:09:26.399 Firmware Activation Without Reset: N/A 00:09:26.399 Multiple Update Detection Support: N/A 00:09:26.399 Firmware Update Granularity: No Information Provided 00:09:26.399 Per-Namespace SMART Log: Yes 00:09:26.399 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.399 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:26.399 Command Effects Log Page: Supported 00:09:26.399 Get Log Page Extended Data: Supported 00:09:26.399 Telemetry Log Pages: Not Supported 00:09:26.399 Persistent Event Log Pages: Not Supported 00:09:26.399 Supported Log Pages Log Page: May Support 00:09:26.399 Commands Supported & Effects Log Page: Not Supported 00:09:26.399 Feature Identifiers & Effects Log Page:May Support 00:09:26.399 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.399 Data Area 4 for Telemetry Log: Not Supported 00:09:26.399 Error Log Page Entries Supported: 1 00:09:26.399 Keep Alive: Not Supported 00:09:26.399 00:09:26.399 NVM Command Set Attributes 00:09:26.399 ========================== 00:09:26.399 Submission Queue Entry Size 00:09:26.399 Max: 64 00:09:26.399 Min: 64 00:09:26.399 Completion Queue Entry Size 00:09:26.399 Max: 16 00:09:26.399 Min: 16 00:09:26.399 Number of Namespaces: 256 00:09:26.399 Compare Command: Supported 00:09:26.399 Write Uncorrectable Command: Not Supported 00:09:26.399 Dataset Management Command: Supported 00:09:26.399 Write Zeroes Command: Supported 00:09:26.399 Set Features Save Field: Supported 00:09:26.399 Reservations: Not Supported 00:09:26.399 Timestamp: Supported 00:09:26.399 Copy: Supported 00:09:26.399 Volatile Write Cache: Present 00:09:26.399 Atomic Write Unit (Normal): 1 00:09:26.399 Atomic Write Unit (PFail): 1 00:09:26.399 Atomic Compare & Write Unit: 1 00:09:26.399 Fused Compare & Write: Not Supported 00:09:26.399 Scatter-Gather List 00:09:26.399 SGL Command Set: Supported 00:09:26.399 SGL Keyed: Not Supported 00:09:26.399 SGL Bit Bucket Descriptor: Not Supported 00:09:26.399 SGL Metadata Pointer: Not Supported 00:09:26.399 Oversized SGL: Not Supported 00:09:26.399 SGL Metadata Address: Not Supported 00:09:26.399 SGL Offset: Not Supported 00:09:26.399 Transport SGL Data Block: Not Supported 00:09:26.399 Replay Protected Memory Block: Not Supported 00:09:26.399 00:09:26.399 Firmware Slot Information 00:09:26.399 ========================= 00:09:26.399 Active slot: 1 00:09:26.399 Slot 1 Firmware Revision: 1.0 00:09:26.399 00:09:26.399 00:09:26.399 Commands Supported and Effects 00:09:26.399 ============================== 00:09:26.399 Admin Commands 00:09:26.399 -------------- 00:09:26.399 Delete I/O Submission Queue (00h): Supported 00:09:26.399 Create I/O Submission Queue (01h): Supported 00:09:26.399 
Get Log Page (02h): Supported 00:09:26.399 Delete I/O Completion Queue (04h): Supported 00:09:26.399 Create I/O Completion Queue (05h): Supported 00:09:26.399 Identify (06h): Supported 00:09:26.399 Abort (08h): Supported 00:09:26.399 Set Features (09h): Supported 00:09:26.399 Get Features (0Ah): Supported 00:09:26.399 Asynchronous Event Request (0Ch): Supported 00:09:26.399 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.399 Directive Send (19h): Supported 00:09:26.399 Directive Receive (1Ah): Supported 00:09:26.399 Virtualization Management (1Ch): Supported 00:09:26.399 Doorbell Buffer Config (7Ch): Supported 00:09:26.399 Format NVM (80h): Supported LBA-Change 00:09:26.399 I/O Commands 00:09:26.399 ------------ 00:09:26.399 Flush (00h): Supported LBA-Change 00:09:26.399 Write (01h): Supported LBA-Change 00:09:26.399 Read (02h): Supported 00:09:26.399 Compare (05h): Supported 00:09:26.399 Write Zeroes (08h): Supported LBA-Change 00:09:26.399 Dataset Management (09h): Supported LBA-Change 00:09:26.399 Unknown (0Ch): Supported 00:09:26.399 Unknown (12h): Supported 00:09:26.399 Copy (19h): Supported LBA-Change 00:09:26.399 Unknown (1Dh): Supported LBA-Change 00:09:26.399 00:09:26.399 Error Log 00:09:26.399 ========= 00:09:26.399 00:09:26.399 Arbitration 00:09:26.399 =========== 00:09:26.399 Arbitration Burst: no limit 00:09:26.399 00:09:26.399 Power Management 00:09:26.399 ================ 00:09:26.399 Number of Power States: 1 00:09:26.399 Current Power State: Power State #0 00:09:26.399 Power State #0: 00:09:26.399 Max Power: 25.00 W 00:09:26.399 Non-Operational State: Operational 00:09:26.399 Entry Latency: 16 microseconds 00:09:26.399 Exit Latency: 4 microseconds 00:09:26.399 Relative Read Throughput: 0 00:09:26.399 Relative Read Latency: 0 00:09:26.399 Relative Write Throughput: 0 00:09:26.399 Relative Write Latency: 0 00:09:26.658 Idle Power: Not Reported 00:09:26.658 Active Power: Not Reported 00:09:26.658 Non-Operational Permissive Mode: Not Supported 00:09:26.658 00:09:26.658 Health Information 00:09:26.658 ================== 00:09:26.658 Critical Warnings: 00:09:26.658 Available Spare Space: OK 00:09:26.658 Temperature: OK 00:09:26.658 Device Reliability: OK 00:09:26.658 Read Only: No 00:09:26.658 Volatile Memory Backup: OK 00:09:26.658 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.658 Available Spare: 0% 00:09:26.658 Available Spare Threshold: 0% 00:09:26.658 Life Percentage Used: 0% 00:09:26.658 Data Units Read: 738 00:09:26.658 Data Units Written: 666 00:09:26.658 Host Read Commands: 34347 00:09:26.658 Host Write Commands: 34133 00:09:26.658 Controller Busy Time: 0 minutes 00:09:26.658 Power Cycles: 0 00:09:26.658 Power On Hours: 0 hours 00:09:26.658 Unsafe Shutdowns: 0 00:09:26.658 Unrecoverable Media Errors: 0 00:09:26.658 Lifetime Error Log Entries: 0 00:09:26.658 Warning Temperature Time: 0 minutes 00:09:26.658 Critical Temperature Time: 0 minutes 00:09:26.658 00:09:26.658 Number of Queues 00:09:26.658 ================ 00:09:26.658 Number of I/O Submission Queues: 64 00:09:26.658 Number of I/O Completion Queues: 64 00:09:26.658 00:09:26.658 ZNS Specific Controller Data 00:09:26.658 ============================ 00:09:26.658 Zone Append Size Limit: 0 00:09:26.658 00:09:26.658 00:09:26.658 Active Namespaces 00:09:26.658 ================= 00:09:26.658 Namespace ID:1 00:09:26.658 Error Recovery Timeout: Unlimited 00:09:26.658 Command Set Identifier: NVM (00h) 00:09:26.658 Deallocate: Supported 
00:09:26.658 Deallocated/Unwritten Error: Supported 00:09:26.658 Deallocated Read Value: All 0x00 00:09:26.658 Deallocate in Write Zeroes: Not Supported 00:09:26.658 Deallocated Guard Field: 0xFFFF 00:09:26.658 Flush: Supported 00:09:26.658 Reservation: Not Supported 00:09:26.658 Metadata Transferred as: Separate Metadata Buffer 00:09:26.658 Namespace Sharing Capabilities: Private 00:09:26.658 Size (in LBAs): 1548666 (5GiB) 00:09:26.658 Capacity (in LBAs): 1548666 (5GiB) 00:09:26.658 Utilization (in LBAs): 1548666 (5GiB) 00:09:26.658 Thin Provisioning: Not Supported 00:09:26.658 Per-NS Atomic Units: No 00:09:26.658 Maximum Single Source Range Length: 128 00:09:26.658 Maximum Copy Length: 128 00:09:26.658 Maximum Source Range Count: 128 00:09:26.658 NGUID/EUI64 Never Reused: No 00:09:26.658 Namespace Write Protected: No 00:09:26.658 Number of LBA Formats: 8 00:09:26.658 Current LBA Format: LBA Format #07 00:09:26.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.658 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:26.658 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.658 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.658 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.658 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.658 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.658 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.658 00:09:26.658 NVM Specific Namespace Data 00:09:26.658 =========================== 00:09:26.658 Logical Block Storage Tag Mask: 0 00:09:26.658 Protection Information Capabilities: 00:09:26.658 16b Guard Protection Information Storage Tag Support: No 00:09:26.658 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.658 Storage Tag Check Read Support: No 00:09:26.658 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.658 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:26.658 15:09:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:26.917 ===================================================== 00:09:26.917 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.917 ===================================================== 00:09:26.917 Controller Capabilities/Features 00:09:26.917 ================================ 00:09:26.917 Vendor ID: 1b36 00:09:26.917 Subsystem Vendor ID: 1af4 00:09:26.917 Serial Number: 12341 00:09:26.917 Model Number: QEMU NVMe Ctrl 00:09:26.917 Firmware Version: 8.0.0 00:09:26.917 Recommended Arb Burst: 6 00:09:26.917 IEEE OUI Identifier: 00 54 52 00:09:26.917 Multi-path I/O 00:09:26.917 May have multiple subsystem ports: No 00:09:26.917 May have multiple 
controllers: No 00:09:26.917 Associated with SR-IOV VF: No 00:09:26.917 Max Data Transfer Size: 524288 00:09:26.917 Max Number of Namespaces: 256 00:09:26.917 Max Number of I/O Queues: 64 00:09:26.917 NVMe Specification Version (VS): 1.4 00:09:26.917 NVMe Specification Version (Identify): 1.4 00:09:26.917 Maximum Queue Entries: 2048 00:09:26.917 Contiguous Queues Required: Yes 00:09:26.917 Arbitration Mechanisms Supported 00:09:26.917 Weighted Round Robin: Not Supported 00:09:26.917 Vendor Specific: Not Supported 00:09:26.917 Reset Timeout: 7500 ms 00:09:26.917 Doorbell Stride: 4 bytes 00:09:26.917 NVM Subsystem Reset: Not Supported 00:09:26.917 Command Sets Supported 00:09:26.917 NVM Command Set: Supported 00:09:26.917 Boot Partition: Not Supported 00:09:26.917 Memory Page Size Minimum: 4096 bytes 00:09:26.917 Memory Page Size Maximum: 65536 bytes 00:09:26.917 Persistent Memory Region: Not Supported 00:09:26.917 Optional Asynchronous Events Supported 00:09:26.917 Namespace Attribute Notices: Supported 00:09:26.917 Firmware Activation Notices: Not Supported 00:09:26.917 ANA Change Notices: Not Supported 00:09:26.917 PLE Aggregate Log Change Notices: Not Supported 00:09:26.917 LBA Status Info Alert Notices: Not Supported 00:09:26.917 EGE Aggregate Log Change Notices: Not Supported 00:09:26.917 Normal NVM Subsystem Shutdown event: Not Supported 00:09:26.917 Zone Descriptor Change Notices: Not Supported 00:09:26.917 Discovery Log Change Notices: Not Supported 00:09:26.917 Controller Attributes 00:09:26.917 128-bit Host Identifier: Not Supported 00:09:26.918 Non-Operational Permissive Mode: Not Supported 00:09:26.918 NVM Sets: Not Supported 00:09:26.918 Read Recovery Levels: Not Supported 00:09:26.918 Endurance Groups: Not Supported 00:09:26.918 Predictable Latency Mode: Not Supported 00:09:26.918 Traffic Based Keep ALive: Not Supported 00:09:26.918 Namespace Granularity: Not Supported 00:09:26.918 SQ Associations: Not Supported 00:09:26.918 UUID List: Not Supported 00:09:26.918 Multi-Domain Subsystem: Not Supported 00:09:26.918 Fixed Capacity Management: Not Supported 00:09:26.918 Variable Capacity Management: Not Supported 00:09:26.918 Delete Endurance Group: Not Supported 00:09:26.918 Delete NVM Set: Not Supported 00:09:26.918 Extended LBA Formats Supported: Supported 00:09:26.918 Flexible Data Placement Supported: Not Supported 00:09:26.918 00:09:26.918 Controller Memory Buffer Support 00:09:26.918 ================================ 00:09:26.918 Supported: No 00:09:26.918 00:09:26.918 Persistent Memory Region Support 00:09:26.918 ================================ 00:09:26.918 Supported: No 00:09:26.918 00:09:26.918 Admin Command Set Attributes 00:09:26.918 ============================ 00:09:26.918 Security Send/Receive: Not Supported 00:09:26.918 Format NVM: Supported 00:09:26.918 Firmware Activate/Download: Not Supported 00:09:26.918 Namespace Management: Supported 00:09:26.918 Device Self-Test: Not Supported 00:09:26.918 Directives: Supported 00:09:26.918 NVMe-MI: Not Supported 00:09:26.918 Virtualization Management: Not Supported 00:09:26.918 Doorbell Buffer Config: Supported 00:09:26.918 Get LBA Status Capability: Not Supported 00:09:26.918 Command & Feature Lockdown Capability: Not Supported 00:09:26.918 Abort Command Limit: 4 00:09:26.918 Async Event Request Limit: 4 00:09:26.918 Number of Firmware Slots: N/A 00:09:26.918 Firmware Slot 1 Read-Only: N/A 00:09:26.918 Firmware Activation Without Reset: N/A 00:09:26.918 Multiple Update Detection Support: N/A 00:09:26.918 Firmware Update 
Granularity: No Information Provided 00:09:26.918 Per-Namespace SMART Log: Yes 00:09:26.918 Asymmetric Namespace Access Log Page: Not Supported 00:09:26.918 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:26.918 Command Effects Log Page: Supported 00:09:26.918 Get Log Page Extended Data: Supported 00:09:26.918 Telemetry Log Pages: Not Supported 00:09:26.918 Persistent Event Log Pages: Not Supported 00:09:26.918 Supported Log Pages Log Page: May Support 00:09:26.918 Commands Supported & Effects Log Page: Not Supported 00:09:26.918 Feature Identifiers & Effects Log Page:May Support 00:09:26.918 NVMe-MI Commands & Effects Log Page: May Support 00:09:26.918 Data Area 4 for Telemetry Log: Not Supported 00:09:26.918 Error Log Page Entries Supported: 1 00:09:26.918 Keep Alive: Not Supported 00:09:26.918 00:09:26.918 NVM Command Set Attributes 00:09:26.918 ========================== 00:09:26.918 Submission Queue Entry Size 00:09:26.918 Max: 64 00:09:26.918 Min: 64 00:09:26.918 Completion Queue Entry Size 00:09:26.918 Max: 16 00:09:26.918 Min: 16 00:09:26.918 Number of Namespaces: 256 00:09:26.918 Compare Command: Supported 00:09:26.918 Write Uncorrectable Command: Not Supported 00:09:26.918 Dataset Management Command: Supported 00:09:26.918 Write Zeroes Command: Supported 00:09:26.918 Set Features Save Field: Supported 00:09:26.918 Reservations: Not Supported 00:09:26.918 Timestamp: Supported 00:09:26.918 Copy: Supported 00:09:26.918 Volatile Write Cache: Present 00:09:26.918 Atomic Write Unit (Normal): 1 00:09:26.918 Atomic Write Unit (PFail): 1 00:09:26.918 Atomic Compare & Write Unit: 1 00:09:26.918 Fused Compare & Write: Not Supported 00:09:26.918 Scatter-Gather List 00:09:26.918 SGL Command Set: Supported 00:09:26.918 SGL Keyed: Not Supported 00:09:26.918 SGL Bit Bucket Descriptor: Not Supported 00:09:26.918 SGL Metadata Pointer: Not Supported 00:09:26.918 Oversized SGL: Not Supported 00:09:26.918 SGL Metadata Address: Not Supported 00:09:26.918 SGL Offset: Not Supported 00:09:26.918 Transport SGL Data Block: Not Supported 00:09:26.918 Replay Protected Memory Block: Not Supported 00:09:26.918 00:09:26.918 Firmware Slot Information 00:09:26.918 ========================= 00:09:26.918 Active slot: 1 00:09:26.918 Slot 1 Firmware Revision: 1.0 00:09:26.918 00:09:26.918 00:09:26.918 Commands Supported and Effects 00:09:26.918 ============================== 00:09:26.918 Admin Commands 00:09:26.918 -------------- 00:09:26.918 Delete I/O Submission Queue (00h): Supported 00:09:26.918 Create I/O Submission Queue (01h): Supported 00:09:26.918 Get Log Page (02h): Supported 00:09:26.918 Delete I/O Completion Queue (04h): Supported 00:09:26.918 Create I/O Completion Queue (05h): Supported 00:09:26.918 Identify (06h): Supported 00:09:26.918 Abort (08h): Supported 00:09:26.918 Set Features (09h): Supported 00:09:26.918 Get Features (0Ah): Supported 00:09:26.918 Asynchronous Event Request (0Ch): Supported 00:09:26.918 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:26.918 Directive Send (19h): Supported 00:09:26.918 Directive Receive (1Ah): Supported 00:09:26.918 Virtualization Management (1Ch): Supported 00:09:26.918 Doorbell Buffer Config (7Ch): Supported 00:09:26.918 Format NVM (80h): Supported LBA-Change 00:09:26.918 I/O Commands 00:09:26.918 ------------ 00:09:26.918 Flush (00h): Supported LBA-Change 00:09:26.918 Write (01h): Supported LBA-Change 00:09:26.918 Read (02h): Supported 00:09:26.918 Compare (05h): Supported 00:09:26.918 Write Zeroes (08h): Supported LBA-Change 00:09:26.918 
Dataset Management (09h): Supported LBA-Change 00:09:26.918 Unknown (0Ch): Supported 00:09:26.918 Unknown (12h): Supported 00:09:26.918 Copy (19h): Supported LBA-Change 00:09:26.918 Unknown (1Dh): Supported LBA-Change 00:09:26.918 00:09:26.918 Error Log 00:09:26.918 ========= 00:09:26.918 00:09:26.918 Arbitration 00:09:26.918 =========== 00:09:26.918 Arbitration Burst: no limit 00:09:26.918 00:09:26.918 Power Management 00:09:26.918 ================ 00:09:26.918 Number of Power States: 1 00:09:26.918 Current Power State: Power State #0 00:09:26.918 Power State #0: 00:09:26.919 Max Power: 25.00 W 00:09:26.919 Non-Operational State: Operational 00:09:26.919 Entry Latency: 16 microseconds 00:09:26.919 Exit Latency: 4 microseconds 00:09:26.919 Relative Read Throughput: 0 00:09:26.919 Relative Read Latency: 0 00:09:26.919 Relative Write Throughput: 0 00:09:26.919 Relative Write Latency: 0 00:09:26.919 Idle Power: Not Reported 00:09:26.919 Active Power: Not Reported 00:09:26.919 Non-Operational Permissive Mode: Not Supported 00:09:26.919 00:09:26.919 Health Information 00:09:26.919 ================== 00:09:26.919 Critical Warnings: 00:09:26.919 Available Spare Space: OK 00:09:26.919 Temperature: OK 00:09:26.919 Device Reliability: OK 00:09:26.919 Read Only: No 00:09:26.919 Volatile Memory Backup: OK 00:09:26.919 Current Temperature: 323 Kelvin (50 Celsius) 00:09:26.919 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:26.919 Available Spare: 0% 00:09:26.919 Available Spare Threshold: 0% 00:09:26.919 Life Percentage Used: 0% 00:09:26.919 Data Units Read: 1143 00:09:26.919 Data Units Written: 1008 00:09:26.919 Host Read Commands: 51635 00:09:26.919 Host Write Commands: 50367 00:09:26.919 Controller Busy Time: 0 minutes 00:09:26.919 Power Cycles: 0 00:09:26.919 Power On Hours: 0 hours 00:09:26.919 Unsafe Shutdowns: 0 00:09:26.919 Unrecoverable Media Errors: 0 00:09:26.919 Lifetime Error Log Entries: 0 00:09:26.919 Warning Temperature Time: 0 minutes 00:09:26.919 Critical Temperature Time: 0 minutes 00:09:26.919 00:09:26.919 Number of Queues 00:09:26.919 ================ 00:09:26.919 Number of I/O Submission Queues: 64 00:09:26.919 Number of I/O Completion Queues: 64 00:09:26.919 00:09:26.919 ZNS Specific Controller Data 00:09:26.919 ============================ 00:09:26.919 Zone Append Size Limit: 0 00:09:26.919 00:09:26.919 00:09:26.919 Active Namespaces 00:09:26.919 ================= 00:09:26.919 Namespace ID:1 00:09:26.919 Error Recovery Timeout: Unlimited 00:09:26.919 Command Set Identifier: NVM (00h) 00:09:26.919 Deallocate: Supported 00:09:26.919 Deallocated/Unwritten Error: Supported 00:09:26.919 Deallocated Read Value: All 0x00 00:09:26.919 Deallocate in Write Zeroes: Not Supported 00:09:26.919 Deallocated Guard Field: 0xFFFF 00:09:26.919 Flush: Supported 00:09:26.919 Reservation: Not Supported 00:09:26.919 Namespace Sharing Capabilities: Private 00:09:26.919 Size (in LBAs): 1310720 (5GiB) 00:09:26.919 Capacity (in LBAs): 1310720 (5GiB) 00:09:26.919 Utilization (in LBAs): 1310720 (5GiB) 00:09:26.919 Thin Provisioning: Not Supported 00:09:26.919 Per-NS Atomic Units: No 00:09:26.919 Maximum Single Source Range Length: 128 00:09:26.919 Maximum Copy Length: 128 00:09:26.919 Maximum Source Range Count: 128 00:09:26.919 NGUID/EUI64 Never Reused: No 00:09:26.919 Namespace Write Protected: No 00:09:26.919 Number of LBA Formats: 8 00:09:26.919 Current LBA Format: LBA Format #04 00:09:26.919 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:26.919 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:09:26.919 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:26.919 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:26.919 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:26.919 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:26.919 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:26.919 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:26.919 00:09:26.919 NVM Specific Namespace Data 00:09:26.919 =========================== 00:09:26.919 Logical Block Storage Tag Mask: 0 00:09:26.919 Protection Information Capabilities: 00:09:26.919 16b Guard Protection Information Storage Tag Support: No 00:09:26.919 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:26.919 Storage Tag Check Read Support: No 00:09:26.919 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:26.919 15:09:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:26.919 15:09:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:27.179 ===================================================== 00:09:27.179 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:27.179 ===================================================== 00:09:27.179 Controller Capabilities/Features 00:09:27.179 ================================ 00:09:27.179 Vendor ID: 1b36 00:09:27.179 Subsystem Vendor ID: 1af4 00:09:27.179 Serial Number: 12342 00:09:27.179 Model Number: QEMU NVMe Ctrl 00:09:27.179 Firmware Version: 8.0.0 00:09:27.179 Recommended Arb Burst: 6 00:09:27.179 IEEE OUI Identifier: 00 54 52 00:09:27.179 Multi-path I/O 00:09:27.179 May have multiple subsystem ports: No 00:09:27.179 May have multiple controllers: No 00:09:27.179 Associated with SR-IOV VF: No 00:09:27.179 Max Data Transfer Size: 524288 00:09:27.179 Max Number of Namespaces: 256 00:09:27.179 Max Number of I/O Queues: 64 00:09:27.179 NVMe Specification Version (VS): 1.4 00:09:27.179 NVMe Specification Version (Identify): 1.4 00:09:27.179 Maximum Queue Entries: 2048 00:09:27.179 Contiguous Queues Required: Yes 00:09:27.179 Arbitration Mechanisms Supported 00:09:27.179 Weighted Round Robin: Not Supported 00:09:27.180 Vendor Specific: Not Supported 00:09:27.180 Reset Timeout: 7500 ms 00:09:27.180 Doorbell Stride: 4 bytes 00:09:27.180 NVM Subsystem Reset: Not Supported 00:09:27.180 Command Sets Supported 00:09:27.180 NVM Command Set: Supported 00:09:27.180 Boot Partition: Not Supported 00:09:27.180 Memory Page Size Minimum: 4096 bytes 00:09:27.180 Memory Page Size Maximum: 65536 bytes 00:09:27.180 Persistent Memory Region: Not Supported 00:09:27.180 Optional Asynchronous Events Supported 00:09:27.180 Namespace Attribute Notices: Supported 00:09:27.180 
Firmware Activation Notices: Not Supported 00:09:27.180 ANA Change Notices: Not Supported 00:09:27.180 PLE Aggregate Log Change Notices: Not Supported 00:09:27.180 LBA Status Info Alert Notices: Not Supported 00:09:27.180 EGE Aggregate Log Change Notices: Not Supported 00:09:27.180 Normal NVM Subsystem Shutdown event: Not Supported 00:09:27.180 Zone Descriptor Change Notices: Not Supported 00:09:27.180 Discovery Log Change Notices: Not Supported 00:09:27.180 Controller Attributes 00:09:27.180 128-bit Host Identifier: Not Supported 00:09:27.180 Non-Operational Permissive Mode: Not Supported 00:09:27.180 NVM Sets: Not Supported 00:09:27.180 Read Recovery Levels: Not Supported 00:09:27.180 Endurance Groups: Not Supported 00:09:27.180 Predictable Latency Mode: Not Supported 00:09:27.180 Traffic Based Keep ALive: Not Supported 00:09:27.180 Namespace Granularity: Not Supported 00:09:27.180 SQ Associations: Not Supported 00:09:27.180 UUID List: Not Supported 00:09:27.180 Multi-Domain Subsystem: Not Supported 00:09:27.180 Fixed Capacity Management: Not Supported 00:09:27.180 Variable Capacity Management: Not Supported 00:09:27.180 Delete Endurance Group: Not Supported 00:09:27.180 Delete NVM Set: Not Supported 00:09:27.180 Extended LBA Formats Supported: Supported 00:09:27.180 Flexible Data Placement Supported: Not Supported 00:09:27.180 00:09:27.180 Controller Memory Buffer Support 00:09:27.180 ================================ 00:09:27.180 Supported: No 00:09:27.180 00:09:27.180 Persistent Memory Region Support 00:09:27.180 ================================ 00:09:27.180 Supported: No 00:09:27.180 00:09:27.180 Admin Command Set Attributes 00:09:27.180 ============================ 00:09:27.180 Security Send/Receive: Not Supported 00:09:27.180 Format NVM: Supported 00:09:27.180 Firmware Activate/Download: Not Supported 00:09:27.180 Namespace Management: Supported 00:09:27.180 Device Self-Test: Not Supported 00:09:27.180 Directives: Supported 00:09:27.180 NVMe-MI: Not Supported 00:09:27.180 Virtualization Management: Not Supported 00:09:27.180 Doorbell Buffer Config: Supported 00:09:27.180 Get LBA Status Capability: Not Supported 00:09:27.180 Command & Feature Lockdown Capability: Not Supported 00:09:27.180 Abort Command Limit: 4 00:09:27.180 Async Event Request Limit: 4 00:09:27.180 Number of Firmware Slots: N/A 00:09:27.180 Firmware Slot 1 Read-Only: N/A 00:09:27.180 Firmware Activation Without Reset: N/A 00:09:27.180 Multiple Update Detection Support: N/A 00:09:27.180 Firmware Update Granularity: No Information Provided 00:09:27.180 Per-Namespace SMART Log: Yes 00:09:27.180 Asymmetric Namespace Access Log Page: Not Supported 00:09:27.180 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:27.180 Command Effects Log Page: Supported 00:09:27.180 Get Log Page Extended Data: Supported 00:09:27.180 Telemetry Log Pages: Not Supported 00:09:27.180 Persistent Event Log Pages: Not Supported 00:09:27.180 Supported Log Pages Log Page: May Support 00:09:27.180 Commands Supported & Effects Log Page: Not Supported 00:09:27.180 Feature Identifiers & Effects Log Page:May Support 00:09:27.180 NVMe-MI Commands & Effects Log Page: May Support 00:09:27.180 Data Area 4 for Telemetry Log: Not Supported 00:09:27.180 Error Log Page Entries Supported: 1 00:09:27.180 Keep Alive: Not Supported 00:09:27.180 00:09:27.180 NVM Command Set Attributes 00:09:27.180 ========================== 00:09:27.180 Submission Queue Entry Size 00:09:27.180 Max: 64 00:09:27.180 Min: 64 00:09:27.180 Completion Queue Entry Size 00:09:27.180 Max: 16 
00:09:27.180 Min: 16 00:09:27.180 Number of Namespaces: 256 00:09:27.180 Compare Command: Supported 00:09:27.180 Write Uncorrectable Command: Not Supported 00:09:27.180 Dataset Management Command: Supported 00:09:27.180 Write Zeroes Command: Supported 00:09:27.180 Set Features Save Field: Supported 00:09:27.180 Reservations: Not Supported 00:09:27.180 Timestamp: Supported 00:09:27.180 Copy: Supported 00:09:27.180 Volatile Write Cache: Present 00:09:27.180 Atomic Write Unit (Normal): 1 00:09:27.180 Atomic Write Unit (PFail): 1 00:09:27.180 Atomic Compare & Write Unit: 1 00:09:27.180 Fused Compare & Write: Not Supported 00:09:27.180 Scatter-Gather List 00:09:27.180 SGL Command Set: Supported 00:09:27.180 SGL Keyed: Not Supported 00:09:27.180 SGL Bit Bucket Descriptor: Not Supported 00:09:27.180 SGL Metadata Pointer: Not Supported 00:09:27.180 Oversized SGL: Not Supported 00:09:27.180 SGL Metadata Address: Not Supported 00:09:27.180 SGL Offset: Not Supported 00:09:27.180 Transport SGL Data Block: Not Supported 00:09:27.180 Replay Protected Memory Block: Not Supported 00:09:27.180 00:09:27.180 Firmware Slot Information 00:09:27.180 ========================= 00:09:27.180 Active slot: 1 00:09:27.180 Slot 1 Firmware Revision: 1.0 00:09:27.180 00:09:27.180 00:09:27.180 Commands Supported and Effects 00:09:27.180 ============================== 00:09:27.180 Admin Commands 00:09:27.180 -------------- 00:09:27.180 Delete I/O Submission Queue (00h): Supported 00:09:27.180 Create I/O Submission Queue (01h): Supported 00:09:27.180 Get Log Page (02h): Supported 00:09:27.180 Delete I/O Completion Queue (04h): Supported 00:09:27.180 Create I/O Completion Queue (05h): Supported 00:09:27.180 Identify (06h): Supported 00:09:27.180 Abort (08h): Supported 00:09:27.180 Set Features (09h): Supported 00:09:27.180 Get Features (0Ah): Supported 00:09:27.180 Asynchronous Event Request (0Ch): Supported 00:09:27.180 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:27.180 Directive Send (19h): Supported 00:09:27.180 Directive Receive (1Ah): Supported 00:09:27.180 Virtualization Management (1Ch): Supported 00:09:27.180 Doorbell Buffer Config (7Ch): Supported 00:09:27.180 Format NVM (80h): Supported LBA-Change 00:09:27.180 I/O Commands 00:09:27.180 ------------ 00:09:27.180 Flush (00h): Supported LBA-Change 00:09:27.180 Write (01h): Supported LBA-Change 00:09:27.180 Read (02h): Supported 00:09:27.180 Compare (05h): Supported 00:09:27.180 Write Zeroes (08h): Supported LBA-Change 00:09:27.180 Dataset Management (09h): Supported LBA-Change 00:09:27.180 Unknown (0Ch): Supported 00:09:27.180 Unknown (12h): Supported 00:09:27.180 Copy (19h): Supported LBA-Change 00:09:27.180 Unknown (1Dh): Supported LBA-Change 00:09:27.180 00:09:27.180 Error Log 00:09:27.180 ========= 00:09:27.180 00:09:27.180 Arbitration 00:09:27.180 =========== 00:09:27.180 Arbitration Burst: no limit 00:09:27.180 00:09:27.180 Power Management 00:09:27.180 ================ 00:09:27.180 Number of Power States: 1 00:09:27.180 Current Power State: Power State #0 00:09:27.180 Power State #0: 00:09:27.180 Max Power: 25.00 W 00:09:27.180 Non-Operational State: Operational 00:09:27.180 Entry Latency: 16 microseconds 00:09:27.180 Exit Latency: 4 microseconds 00:09:27.180 Relative Read Throughput: 0 00:09:27.180 Relative Read Latency: 0 00:09:27.180 Relative Write Throughput: 0 00:09:27.180 Relative Write Latency: 0 00:09:27.180 Idle Power: Not Reported 00:09:27.180 Active Power: Not Reported 00:09:27.180 Non-Operational Permissive Mode: Not Supported 
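A note for the Health Information blocks in this output: the tool prints temperature in Kelvin with a rounded Celsius value in parentheses. Since Celsius = Kelvin − 273.15, the reported 323 Kelvin is 49.85 °C (printed as 50 Celsius) and the 343 Kelvin threshold is 69.85 °C (printed as 70 Celsius).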
00:09:27.180 00:09:27.180 Health Information 00:09:27.180 ================== 00:09:27.180 Critical Warnings: 00:09:27.180 Available Spare Space: OK 00:09:27.180 Temperature: OK 00:09:27.180 Device Reliability: OK 00:09:27.180 Read Only: No 00:09:27.180 Volatile Memory Backup: OK 00:09:27.180 Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.180 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:27.180 Available Spare: 0% 00:09:27.180 Available Spare Threshold: 0% 00:09:27.180 Life Percentage Used: 0% 00:09:27.180 Data Units Read: 2349 00:09:27.180 Data Units Written: 2136 00:09:27.180 Host Read Commands: 104932 00:09:27.180 Host Write Commands: 103201 00:09:27.180 Controller Busy Time: 0 minutes 00:09:27.180 Power Cycles: 0 00:09:27.180 Power On Hours: 0 hours 00:09:27.180 Unsafe Shutdowns: 0 00:09:27.180 Unrecoverable Media Errors: 0 00:09:27.180 Lifetime Error Log Entries: 0 00:09:27.180 Warning Temperature Time: 0 minutes 00:09:27.180 Critical Temperature Time: 0 minutes 00:09:27.180 00:09:27.180 Number of Queues 00:09:27.180 ================ 00:09:27.180 Number of I/O Submission Queues: 64 00:09:27.181 Number of I/O Completion Queues: 64 00:09:27.181 00:09:27.181 ZNS Specific Controller Data 00:09:27.181 ============================ 00:09:27.181 Zone Append Size Limit: 0 00:09:27.181 00:09:27.181 00:09:27.181 Active Namespaces 00:09:27.181 ================= 00:09:27.181 Namespace ID:1 00:09:27.181 Error Recovery Timeout: Unlimited 00:09:27.181 Command Set Identifier: NVM (00h) 00:09:27.181 Deallocate: Supported 00:09:27.181 Deallocated/Unwritten Error: Supported 00:09:27.181 Deallocated Read Value: All 0x00 00:09:27.181 Deallocate in Write Zeroes: Not Supported 00:09:27.181 Deallocated Guard Field: 0xFFFF 00:09:27.181 Flush: Supported 00:09:27.181 Reservation: Not Supported 00:09:27.181 Namespace Sharing Capabilities: Private 00:09:27.181 Size (in LBAs): 1048576 (4GiB) 00:09:27.181 Capacity (in LBAs): 1048576 (4GiB) 00:09:27.181 Utilization (in LBAs): 1048576 (4GiB) 00:09:27.181 Thin Provisioning: Not Supported 00:09:27.181 Per-NS Atomic Units: No 00:09:27.181 Maximum Single Source Range Length: 128 00:09:27.181 Maximum Copy Length: 128 00:09:27.181 Maximum Source Range Count: 128 00:09:27.181 NGUID/EUI64 Never Reused: No 00:09:27.181 Namespace Write Protected: No 00:09:27.181 Number of LBA Formats: 8 00:09:27.181 Current LBA Format: LBA Format #04 00:09:27.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:27.181 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:27.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:27.181 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:27.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:27.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:27.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:27.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:27.181 00:09:27.181 NVM Specific Namespace Data 00:09:27.181 =========================== 00:09:27.181 Logical Block Storage Tag Mask: 0 00:09:27.181 Protection Information Capabilities: 00:09:27.181 16b Guard Protection Information Storage Tag Support: No 00:09:27.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:27.181 Storage Tag Check Read Support: No 00:09:27.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Namespace ID:2 00:09:27.181 Error Recovery Timeout: Unlimited 00:09:27.181 Command Set Identifier: NVM (00h) 00:09:27.181 Deallocate: Supported 00:09:27.181 Deallocated/Unwritten Error: Supported 00:09:27.181 Deallocated Read Value: All 0x00 00:09:27.181 Deallocate in Write Zeroes: Not Supported 00:09:27.181 Deallocated Guard Field: 0xFFFF 00:09:27.181 Flush: Supported 00:09:27.181 Reservation: Not Supported 00:09:27.181 Namespace Sharing Capabilities: Private 00:09:27.181 Size (in LBAs): 1048576 (4GiB) 00:09:27.181 Capacity (in LBAs): 1048576 (4GiB) 00:09:27.181 Utilization (in LBAs): 1048576 (4GiB) 00:09:27.181 Thin Provisioning: Not Supported 00:09:27.181 Per-NS Atomic Units: No 00:09:27.181 Maximum Single Source Range Length: 128 00:09:27.181 Maximum Copy Length: 128 00:09:27.181 Maximum Source Range Count: 128 00:09:27.181 NGUID/EUI64 Never Reused: No 00:09:27.181 Namespace Write Protected: No 00:09:27.181 Number of LBA Formats: 8 00:09:27.181 Current LBA Format: LBA Format #04 00:09:27.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:27.181 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:27.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:27.181 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:27.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:27.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:27.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:27.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:27.181 00:09:27.181 NVM Specific Namespace Data 00:09:27.181 =========================== 00:09:27.181 Logical Block Storage Tag Mask: 0 00:09:27.181 Protection Information Capabilities: 00:09:27.181 16b Guard Protection Information Storage Tag Support: No 00:09:27.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:27.181 Storage Tag Check Read Support: No 00:09:27.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Namespace ID:3 00:09:27.181 Error Recovery Timeout: Unlimited 00:09:27.181 Command Set Identifier: NVM (00h) 00:09:27.181 Deallocate: Supported 00:09:27.181 Deallocated/Unwritten Error: Supported 00:09:27.181 Deallocated Read 
Value: All 0x00 00:09:27.181 Deallocate in Write Zeroes: Not Supported 00:09:27.181 Deallocated Guard Field: 0xFFFF 00:09:27.181 Flush: Supported 00:09:27.181 Reservation: Not Supported 00:09:27.181 Namespace Sharing Capabilities: Private 00:09:27.181 Size (in LBAs): 1048576 (4GiB) 00:09:27.181 Capacity (in LBAs): 1048576 (4GiB) 00:09:27.181 Utilization (in LBAs): 1048576 (4GiB) 00:09:27.181 Thin Provisioning: Not Supported 00:09:27.181 Per-NS Atomic Units: No 00:09:27.181 Maximum Single Source Range Length: 128 00:09:27.181 Maximum Copy Length: 128 00:09:27.181 Maximum Source Range Count: 128 00:09:27.181 NGUID/EUI64 Never Reused: No 00:09:27.181 Namespace Write Protected: No 00:09:27.181 Number of LBA Formats: 8 00:09:27.181 Current LBA Format: LBA Format #04 00:09:27.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:27.181 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:27.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:27.181 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:27.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:27.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:27.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:27.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:27.181 00:09:27.181 NVM Specific Namespace Data 00:09:27.181 =========================== 00:09:27.181 Logical Block Storage Tag Mask: 0 00:09:27.181 Protection Information Capabilities: 00:09:27.181 16b Guard Protection Information Storage Tag Support: No 00:09:27.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:27.181 Storage Tag Check Read Support: No 00:09:27.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.181 15:09:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:27.181 15:09:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:27.441 ===================================================== 00:09:27.441 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:27.441 ===================================================== 00:09:27.441 Controller Capabilities/Features 00:09:27.441 ================================ 00:09:27.441 Vendor ID: 1b36 00:09:27.441 Subsystem Vendor ID: 1af4 00:09:27.441 Serial Number: 12343 00:09:27.441 Model Number: QEMU NVMe Ctrl 00:09:27.441 Firmware Version: 8.0.0 00:09:27.441 Recommended Arb Burst: 6 00:09:27.441 IEEE OUI Identifier: 00 54 52 00:09:27.441 Multi-path I/O 00:09:27.441 May have multiple subsystem ports: No 00:09:27.441 May have multiple controllers: Yes 00:09:27.441 Associated with SR-IOV VF: No 00:09:27.441 Max Data Transfer Size: 524288 00:09:27.441 Max Number of Namespaces: 
256 00:09:27.441 Max Number of I/O Queues: 64 00:09:27.441 NVMe Specification Version (VS): 1.4 00:09:27.441 NVMe Specification Version (Identify): 1.4 00:09:27.441 Maximum Queue Entries: 2048 00:09:27.441 Contiguous Queues Required: Yes 00:09:27.441 Arbitration Mechanisms Supported 00:09:27.441 Weighted Round Robin: Not Supported 00:09:27.441 Vendor Specific: Not Supported 00:09:27.441 Reset Timeout: 7500 ms 00:09:27.441 Doorbell Stride: 4 bytes 00:09:27.441 NVM Subsystem Reset: Not Supported 00:09:27.441 Command Sets Supported 00:09:27.441 NVM Command Set: Supported 00:09:27.441 Boot Partition: Not Supported 00:09:27.441 Memory Page Size Minimum: 4096 bytes 00:09:27.441 Memory Page Size Maximum: 65536 bytes 00:09:27.441 Persistent Memory Region: Not Supported 00:09:27.441 Optional Asynchronous Events Supported 00:09:27.441 Namespace Attribute Notices: Supported 00:09:27.441 Firmware Activation Notices: Not Supported 00:09:27.441 ANA Change Notices: Not Supported 00:09:27.441 PLE Aggregate Log Change Notices: Not Supported 00:09:27.441 LBA Status Info Alert Notices: Not Supported 00:09:27.441 EGE Aggregate Log Change Notices: Not Supported 00:09:27.441 Normal NVM Subsystem Shutdown event: Not Supported 00:09:27.441 Zone Descriptor Change Notices: Not Supported 00:09:27.441 Discovery Log Change Notices: Not Supported 00:09:27.441 Controller Attributes 00:09:27.441 128-bit Host Identifier: Not Supported 00:09:27.441 Non-Operational Permissive Mode: Not Supported 00:09:27.441 NVM Sets: Not Supported 00:09:27.441 Read Recovery Levels: Not Supported 00:09:27.441 Endurance Groups: Supported 00:09:27.441 Predictable Latency Mode: Not Supported 00:09:27.441 Traffic Based Keep Alive: Not Supported 00:09:27.441 Namespace Granularity: Not Supported 00:09:27.441 SQ Associations: Not Supported 00:09:27.441 UUID List: Not Supported 00:09:27.441 Multi-Domain Subsystem: Not Supported 00:09:27.441 Fixed Capacity Management: Not Supported 00:09:27.441 Variable Capacity Management: Not Supported 00:09:27.441 Delete Endurance Group: Not Supported 00:09:27.441 Delete NVM Set: Not Supported 00:09:27.441 Extended LBA Formats Supported: Supported 00:09:27.441 Flexible Data Placement Supported: Supported 00:09:27.441 00:09:27.441 Controller Memory Buffer Support 00:09:27.441 ================================ 00:09:27.441 Supported: No 00:09:27.441 00:09:27.441 Persistent Memory Region Support 00:09:27.441 ================================ 00:09:27.441 Supported: No 00:09:27.441 00:09:27.441 Admin Command Set Attributes 00:09:27.441 ============================ 00:09:27.441 Security Send/Receive: Not Supported 00:09:27.441 Format NVM: Supported 00:09:27.441 Firmware Activate/Download: Not Supported 00:09:27.441 Namespace Management: Supported 00:09:27.441 Device Self-Test: Not Supported 00:09:27.441 Directives: Supported 00:09:27.441 NVMe-MI: Not Supported 00:09:27.441 Virtualization Management: Not Supported 00:09:27.441 Doorbell Buffer Config: Supported 00:09:27.441 Get LBA Status Capability: Not Supported 00:09:27.441 Command & Feature Lockdown Capability: Not Supported 00:09:27.441 Abort Command Limit: 4 00:09:27.441 Async Event Request Limit: 4 00:09:27.441 Number of Firmware Slots: N/A 00:09:27.441 Firmware Slot 1 Read-Only: N/A 00:09:27.441 Firmware Activation Without Reset: N/A 00:09:27.441 Multiple Update Detection Support: N/A 00:09:27.441 Firmware Update Granularity: No Information Provided 00:09:27.441 Per-Namespace SMART Log: Yes 00:09:27.441 Asymmetric Namespace Access Log Page: Not Supported
00:09:27.441 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:27.441 Command Effects Log Page: Supported 00:09:27.441 Get Log Page Extended Data: Supported 00:09:27.441 Telemetry Log Pages: Not Supported 00:09:27.441 Persistent Event Log Pages: Not Supported 00:09:27.441 Supported Log Pages Log Page: May Support 00:09:27.441 Commands Supported & Effects Log Page: Not Supported 00:09:27.441 Feature Identifiers & Effects Log Page: May Support 00:09:27.441 NVMe-MI Commands & Effects Log Page: May Support 00:09:27.441 Data Area 4 for Telemetry Log: Not Supported 00:09:27.441 Error Log Page Entries Supported: 1 00:09:27.441 Keep Alive: Not Supported 00:09:27.441 00:09:27.441 NVM Command Set Attributes 00:09:27.441 ========================== 00:09:27.441 Submission Queue Entry Size 00:09:27.441 Max: 64 00:09:27.441 Min: 64 00:09:27.441 Completion Queue Entry Size 00:09:27.441 Max: 16 00:09:27.441 Min: 16 00:09:27.441 Number of Namespaces: 256 00:09:27.441 Compare Command: Supported 00:09:27.441 Write Uncorrectable Command: Not Supported 00:09:27.441 Dataset Management Command: Supported 00:09:27.441 Write Zeroes Command: Supported 00:09:27.441 Set Features Save Field: Supported 00:09:27.441 Reservations: Not Supported 00:09:27.441 Timestamp: Supported 00:09:27.441 Copy: Supported 00:09:27.441 Volatile Write Cache: Present 00:09:27.441 Atomic Write Unit (Normal): 1 00:09:27.441 Atomic Write Unit (PFail): 1 00:09:27.441 Atomic Compare & Write Unit: 1 00:09:27.441 Fused Compare & Write: Not Supported 00:09:27.441 Scatter-Gather List 00:09:27.441 SGL Command Set: Supported 00:09:27.441 SGL Keyed: Not Supported 00:09:27.441 SGL Bit Bucket Descriptor: Not Supported 00:09:27.441 SGL Metadata Pointer: Not Supported 00:09:27.442 Oversized SGL: Not Supported 00:09:27.442 SGL Metadata Address: Not Supported 00:09:27.442 SGL Offset: Not Supported 00:09:27.442 Transport SGL Data Block: Not Supported 00:09:27.442 Replay Protected Memory Block: Not Supported 00:09:27.442 00:09:27.442 Firmware Slot Information 00:09:27.442 ========================= 00:09:27.442 Active slot: 1 00:09:27.442 Slot 1 Firmware Revision: 1.0 00:09:27.442 00:09:27.442 00:09:27.442 Commands Supported and Effects 00:09:27.442 ============================== 00:09:27.442 Admin Commands 00:09:27.442 -------------- 00:09:27.442 Delete I/O Submission Queue (00h): Supported 00:09:27.442 Create I/O Submission Queue (01h): Supported 00:09:27.442 Get Log Page (02h): Supported 00:09:27.442 Delete I/O Completion Queue (04h): Supported 00:09:27.442 Create I/O Completion Queue (05h): Supported 00:09:27.442 Identify (06h): Supported 00:09:27.442 Abort (08h): Supported 00:09:27.442 Set Features (09h): Supported 00:09:27.442 Get Features (0Ah): Supported 00:09:27.442 Asynchronous Event Request (0Ch): Supported 00:09:27.442 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:27.442 Directive Send (19h): Supported 00:09:27.442 Directive Receive (1Ah): Supported 00:09:27.442 Virtualization Management (1Ch): Supported 00:09:27.442 Doorbell Buffer Config (7Ch): Supported 00:09:27.442 Format NVM (80h): Supported LBA-Change 00:09:27.442 I/O Commands 00:09:27.442 ------------ 00:09:27.442 Flush (00h): Supported LBA-Change 00:09:27.442 Write (01h): Supported LBA-Change 00:09:27.442 Read (02h): Supported 00:09:27.442 Compare (05h): Supported 00:09:27.442 Write Zeroes (08h): Supported LBA-Change 00:09:27.442 Dataset Management (09h): Supported LBA-Change 00:09:27.442 Unknown (0Ch): Supported 00:09:27.442 Unknown (12h): Supported 00:09:27.442 Copy
(19h): Supported LBA-Change 00:09:27.442 Unknown (1Dh): Supported LBA-Change 00:09:27.442 00:09:27.442 Error Log 00:09:27.442 ========= 00:09:27.442 00:09:27.442 Arbitration 00:09:27.442 =========== 00:09:27.442 Arbitration Burst: no limit 00:09:27.442 00:09:27.442 Power Management 00:09:27.442 ================ 00:09:27.442 Number of Power States: 1 00:09:27.442 Current Power State: Power State #0 00:09:27.442 Power State #0: 00:09:27.442 Max Power: 25.00 W 00:09:27.442 Non-Operational State: Operational 00:09:27.442 Entry Latency: 16 microseconds 00:09:27.442 Exit Latency: 4 microseconds 00:09:27.442 Relative Read Throughput: 0 00:09:27.442 Relative Read Latency: 0 00:09:27.442 Relative Write Throughput: 0 00:09:27.442 Relative Write Latency: 0 00:09:27.442 Idle Power: Not Reported 00:09:27.442 Active Power: Not Reported 00:09:27.442 Non-Operational Permissive Mode: Not Supported 00:09:27.442 00:09:27.442 Health Information 00:09:27.442 ================== 00:09:27.442 Critical Warnings: 00:09:27.442 Available Spare Space: OK 00:09:27.442 Temperature: OK 00:09:27.442 Device Reliability: OK 00:09:27.442 Read Only: No 00:09:27.442 Volatile Memory Backup: OK 00:09:27.442 Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.442 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:27.442 Available Spare: 0% 00:09:27.442 Available Spare Threshold: 0% 00:09:27.442 Life Percentage Used: 0% 00:09:27.442 Data Units Read: 852 00:09:27.442 Data Units Written: 781 00:09:27.442 Host Read Commands: 35650 00:09:27.442 Host Write Commands: 35073 00:09:27.442 Controller Busy Time: 0 minutes 00:09:27.442 Power Cycles: 0 00:09:27.442 Power On Hours: 0 hours 00:09:27.442 Unsafe Shutdowns: 0 00:09:27.442 Unrecoverable Media Errors: 0 00:09:27.442 Lifetime Error Log Entries: 0 00:09:27.442 Warning Temperature Time: 0 minutes 00:09:27.442 Critical Temperature Time: 0 minutes 00:09:27.442 00:09:27.442 Number of Queues 00:09:27.442 ================ 00:09:27.442 Number of I/O Submission Queues: 64 00:09:27.442 Number of I/O Completion Queues: 64 00:09:27.442 00:09:27.442 ZNS Specific Controller Data 00:09:27.442 ============================ 00:09:27.442 Zone Append Size Limit: 0 00:09:27.442 00:09:27.442 00:09:27.442 Active Namespaces 00:09:27.442 ================= 00:09:27.442 Namespace ID:1 00:09:27.442 Error Recovery Timeout: Unlimited 00:09:27.442 Command Set Identifier: NVM (00h) 00:09:27.442 Deallocate: Supported 00:09:27.442 Deallocated/Unwritten Error: Supported 00:09:27.442 Deallocated Read Value: All 0x00 00:09:27.442 Deallocate in Write Zeroes: Not Supported 00:09:27.442 Deallocated Guard Field: 0xFFFF 00:09:27.442 Flush: Supported 00:09:27.442 Reservation: Not Supported 00:09:27.442 Namespace Sharing Capabilities: Multiple Controllers 00:09:27.442 Size (in LBAs): 262144 (1GiB) 00:09:27.442 Capacity (in LBAs): 262144 (1GiB) 00:09:27.442 Utilization (in LBAs): 262144 (1GiB) 00:09:27.442 Thin Provisioning: Not Supported 00:09:27.442 Per-NS Atomic Units: No 00:09:27.442 Maximum Single Source Range Length: 128 00:09:27.442 Maximum Copy Length: 128 00:09:27.442 Maximum Source Range Count: 128 00:09:27.442 NGUID/EUI64 Never Reused: No 00:09:27.442 Namespace Write Protected: No 00:09:27.442 Endurance group ID: 1 00:09:27.442 Number of LBA Formats: 8 00:09:27.442 Current LBA Format: LBA Format #04 00:09:27.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:27.442 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:27.442 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:27.442 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:27.442 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:27.442 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:27.442 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:27.442 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:27.442 00:09:27.442 Get Feature FDP: 00:09:27.442 ================ 00:09:27.442 Enabled: Yes 00:09:27.442 FDP configuration index: 0 00:09:27.442 00:09:27.442 FDP configurations log page 00:09:27.442 =========================== 00:09:27.442 Number of FDP configurations: 1 00:09:27.442 Version: 0 00:09:27.442 Size: 112 00:09:27.442 FDP Configuration Descriptor: 0 00:09:27.442 Descriptor Size: 96 00:09:27.442 Reclaim Group Identifier format: 2 00:09:27.442 FDP Volatile Write Cache: Not Present 00:09:27.442 FDP Configuration: Valid 00:09:27.442 Vendor Specific Size: 0 00:09:27.442 Number of Reclaim Groups: 2 00:09:27.442 Number of Reclaim Unit Handles: 8 00:09:27.442 Max Placement Identifiers: 128 00:09:27.442 Number of Namespaces Supported: 256 00:09:27.443 Reclaim Unit Nominal Size: 6000000 bytes 00:09:27.443 Estimated Reclaim Unit Time Limit: Not Reported 00:09:27.443 RUH Desc #000: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #001: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #002: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #003: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #004: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #005: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #006: RUH Type: Initially Isolated 00:09:27.443 RUH Desc #007: RUH Type: Initially Isolated 00:09:27.443 00:09:27.443 FDP reclaim unit handle usage log page 00:09:27.443 ====================================== 00:09:27.443 Number of Reclaim Unit Handles: 8 00:09:27.443 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:27.443 RUH Usage Desc #001: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #002: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #003: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #004: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #005: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #006: RUH Attributes: Unused 00:09:27.443 RUH Usage Desc #007: RUH Attributes: Unused 00:09:27.443 00:09:27.443 FDP statistics log page 00:09:27.443 ======================= 00:09:27.443 Host bytes with metadata written: 500342784 00:09:27.443 Media bytes with metadata written: 500396032 00:09:27.443 Media bytes erased: 0 00:09:27.443 00:09:27.443 FDP events log page 00:09:27.443 =================== 00:09:27.443 Number of FDP events: 0 00:09:27.443 00:09:27.443 NVM Specific Namespace Data 00:09:27.443 =========================== 00:09:27.443 Logical Block Storage Tag Mask: 0 00:09:27.443 Protection Information Capabilities: 00:09:27.443 16b Guard Protection Information Storage Tag Support: No 00:09:27.443 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:27.443 Storage Tag Check Read Support: No 00:09:27.443 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:27.443 00:09:27.443 real 0m1.596s 00:09:27.443 user 0m0.568s 00:09:27.443 sys 0m0.822s 00:09:27.443 15:09:25 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.443 15:09:25 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:27.443 ************************************ 00:09:27.443 END TEST nvme_identify 00:09:27.443 ************************************ 00:09:27.443 15:09:25 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:27.443 15:09:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.443 15:09:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.443 15:09:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.443 ************************************ 00:09:27.443 START TEST nvme_perf 00:09:27.443 ************************************ 00:09:27.443 15:09:25 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:09:27.443 15:09:25 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:28.847 Initializing NVMe Controllers 00:09:28.847 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.847 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.847 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.847 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.847 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:28.847 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:28.847 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:28.847 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:28.847 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:28.847 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:28.847 Initialization complete. Launching workers. 
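The two binaries exercised in this stretch are driven from nvme/nvme.sh, whose xtrace lines appear above. As a minimal sketch (editor-added, not captured output), the equivalent manual invocation would look like the shell lines below: the bdfs values are read off the "Attached to" lines just above, the paths and flags are copied verbatim from the trace, and the flag comments are informal glosses rather than the tools' own help text.

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      # dump controller, namespace, and (where enabled) FDP data per device
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done
  # -q 128: queue depth, -w read: workload type, -o 12288: 12 KiB I/O size,
  # -t 1: one-second run, -LL: latency tracking (what produces the summary
  # percentiles and histograms below); -i 0 and -N carried over unchanged
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Two reading aids for the results that follow: in the Device Information table, MiB/s is IOPS scaled by the 12 KiB I/O size (11291.26 × 12288 / 2^20 ≈ 132.32 MiB/s per namespace; 67747.55 × 12288 / 2^20 ≈ 793.92 MiB/s for the Total row), and in the latency histograms the percentage on each bucket row is cumulative across buckets while the parenthesized number is the I/O count in that bucket alone.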
00:09:28.847 ======================================================== 00:09:28.847 Latency(us) 00:09:28.847 Device Information : IOPS MiB/s Average min max 00:09:28.847 PCIE (0000:00:10.0) NSID 1 from core 0: 11291.26 132.32 11343.16 6378.16 40565.02 00:09:28.847 PCIE (0000:00:11.0) NSID 1 from core 0: 11291.26 132.32 11332.47 6358.44 39696.34 00:09:28.847 PCIE (0000:00:13.0) NSID 1 from core 0: 11291.26 132.32 11320.02 5654.93 39506.93 00:09:28.847 PCIE (0000:00:12.0) NSID 1 from core 0: 11291.26 132.32 11305.49 5406.78 38717.69 00:09:28.847 PCIE (0000:00:12.0) NSID 2 from core 0: 11291.26 132.32 11292.97 5220.64 37896.63 00:09:28.847 PCIE (0000:00:12.0) NSID 3 from core 0: 11291.26 132.32 11280.65 4993.89 37148.39 00:09:28.847 ======================================================== 00:09:28.847 Total : 67747.55 793.92 11312.46 4993.89 40565.02 00:09:28.847 00:09:28.847 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:28.847 ================================================================================= 00:09:28.847 1.00000% : 7790.625us 00:09:28.847 10.00000% : 8685.494us 00:09:28.847 25.00000% : 9422.445us 00:09:28.847 50.00000% : 10527.871us 00:09:28.847 75.00000% : 12054.413us 00:09:28.847 90.00000% : 14739.020us 00:09:28.847 95.00000% : 17055.152us 00:09:28.847 98.00000% : 20739.907us 00:09:28.847 99.00000% : 31794.172us 00:09:28.847 99.50000% : 39374.239us 00:09:28.847 99.90000% : 40427.027us 00:09:28.847 99.99000% : 40637.584us 00:09:28.847 99.99900% : 40637.584us 00:09:28.847 99.99990% : 40637.584us 00:09:28.847 99.99999% : 40637.584us 00:09:28.847 00:09:28.847 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:28.847 ================================================================================= 00:09:28.847 1.00000% : 7843.264us 00:09:28.847 10.00000% : 8685.494us 00:09:28.847 25.00000% : 9369.806us 00:09:28.847 50.00000% : 10527.871us 00:09:28.847 75.00000% : 12001.773us 00:09:28.847 90.00000% : 14844.299us 00:09:28.847 95.00000% : 17581.545us 00:09:28.847 98.00000% : 20950.464us 00:09:28.847 99.00000% : 32425.844us 00:09:28.847 99.50000% : 38532.010us 00:09:28.847 99.90000% : 39584.797us 00:09:28.847 99.99000% : 39795.354us 00:09:28.847 99.99900% : 39795.354us 00:09:28.847 99.99990% : 39795.354us 00:09:28.847 99.99999% : 39795.354us 00:09:28.847 00:09:28.847 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:28.847 ================================================================================= 00:09:28.847 1.00000% : 7737.986us 00:09:28.847 10.00000% : 8632.855us 00:09:28.847 25.00000% : 9369.806us 00:09:28.847 50.00000% : 10527.871us 00:09:28.847 75.00000% : 12054.413us 00:09:28.847 90.00000% : 14633.741us 00:09:28.847 95.00000% : 17055.152us 00:09:28.847 98.00000% : 20002.956us 00:09:28.847 99.00000% : 32215.287us 00:09:28.847 99.50000% : 38321.452us 00:09:28.847 99.90000% : 39374.239us 00:09:28.847 99.99000% : 39584.797us 00:09:28.847 99.99900% : 39584.797us 00:09:28.847 99.99990% : 39584.797us 00:09:28.847 99.99999% : 39584.797us 00:09:28.847 00:09:28.847 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:28.847 ================================================================================= 00:09:28.847 1.00000% : 7685.346us 00:09:28.847 10.00000% : 8632.855us 00:09:28.847 25.00000% : 9422.445us 00:09:28.847 50.00000% : 10475.232us 00:09:28.847 75.00000% : 12001.773us 00:09:28.847 90.00000% : 14739.020us 00:09:28.847 95.00000% : 17265.709us 00:09:28.847 98.00000% : 19371.284us 
00:09:28.847 99.00000% : 31373.057us 00:09:28.847 99.50000% : 37268.665us 00:09:28.847 99.90000% : 38532.010us 00:09:28.847 99.99000% : 38742.567us 00:09:28.847 99.99900% : 38742.567us 00:09:28.847 99.99990% : 38742.567us 00:09:28.847 99.99999% : 38742.567us 00:09:28.847 00:09:28.847 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:28.847 ================================================================================= 00:09:28.847 1.00000% : 7580.067us 00:09:28.847 10.00000% : 8632.855us 00:09:28.847 25.00000% : 9422.445us 00:09:28.847 50.00000% : 10527.871us 00:09:28.847 75.00000% : 12054.413us 00:09:28.847 90.00000% : 14633.741us 00:09:28.847 95.00000% : 17160.431us 00:09:28.847 98.00000% : 19897.677us 00:09:28.847 99.00000% : 30741.385us 00:09:28.847 99.50000% : 36636.993us 00:09:28.847 99.90000% : 37689.780us 00:09:28.847 99.99000% : 37900.337us 00:09:28.848 99.99900% : 37900.337us 00:09:28.848 99.99990% : 37900.337us 00:09:28.848 99.99999% : 37900.337us 00:09:28.848 00:09:28.848 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:28.848 ================================================================================= 00:09:28.848 1.00000% : 7422.149us 00:09:28.848 10.00000% : 8632.855us 00:09:28.848 25.00000% : 9422.445us 00:09:28.848 50.00000% : 10527.871us 00:09:28.848 75.00000% : 12001.773us 00:09:28.848 90.00000% : 14633.741us 00:09:28.848 95.00000% : 16949.873us 00:09:28.848 98.00000% : 20424.071us 00:09:28.848 99.00000% : 30109.712us 00:09:28.848 99.50000% : 36005.320us 00:09:28.848 99.90000% : 37058.108us 00:09:28.848 99.99000% : 37268.665us 00:09:28.848 99.99900% : 37268.665us 00:09:28.848 99.99990% : 37268.665us 00:09:28.848 99.99999% : 37268.665us 00:09:28.848 00:09:28.848 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:28.848 ============================================================================== 00:09:28.848 Range in us Cumulative IO count 00:09:28.848 6369.362 - 6395.682: 0.0088% ( 1) 00:09:28.848 6395.682 - 6422.002: 0.0265% ( 2) 00:09:28.848 6422.002 - 6448.321: 0.0441% ( 2) 00:09:28.848 6474.641 - 6500.961: 0.0706% ( 3) 00:09:28.848 6500.961 - 6527.280: 0.0883% ( 2) 00:09:28.848 6527.280 - 6553.600: 0.0971% ( 1) 00:09:28.848 6553.600 - 6579.920: 0.1148% ( 2) 00:09:28.848 6579.920 - 6606.239: 0.1236% ( 1) 00:09:28.848 6606.239 - 6632.559: 0.1412% ( 2) 00:09:28.848 6632.559 - 6658.879: 0.1501% ( 1) 00:09:28.848 6658.879 - 6685.198: 0.1677% ( 2) 00:09:28.848 6685.198 - 6711.518: 0.1766% ( 1) 00:09:28.848 6711.518 - 6737.838: 0.1854% ( 1) 00:09:28.848 6737.838 - 6790.477: 0.2119% ( 3) 00:09:28.848 6790.477 - 6843.116: 0.2472% ( 4) 00:09:28.848 6843.116 - 6895.756: 0.2825% ( 4) 00:09:28.848 6895.756 - 6948.395: 0.3001% ( 2) 00:09:28.848 6948.395 - 7001.035: 0.3355% ( 4) 00:09:28.848 7001.035 - 7053.674: 0.3531% ( 2) 00:09:28.848 7053.674 - 7106.313: 0.3708% ( 2) 00:09:28.848 7106.313 - 7158.953: 0.4061% ( 4) 00:09:28.848 7158.953 - 7211.592: 0.4326% ( 3) 00:09:28.848 7211.592 - 7264.231: 0.4679% ( 4) 00:09:28.848 7264.231 - 7316.871: 0.4944% ( 3) 00:09:28.848 7316.871 - 7369.510: 0.5120% ( 2) 00:09:28.848 7369.510 - 7422.149: 0.5385% ( 3) 00:09:28.848 7422.149 - 7474.789: 0.5650% ( 3) 00:09:28.848 7474.789 - 7527.428: 0.5826% ( 2) 00:09:28.848 7527.428 - 7580.067: 0.6091% ( 3) 00:09:28.848 7580.067 - 7632.707: 0.7062% ( 11) 00:09:28.848 7632.707 - 7685.346: 0.8298% ( 14) 00:09:28.848 7685.346 - 7737.986: 0.9799% ( 17) 00:09:28.848 7737.986 - 7790.625: 1.1829% ( 23) 00:09:28.848 7790.625 - 7843.264: 1.4566% 
( 31) 00:09:28.848 7843.264 - 7895.904: 1.7744% ( 36) 00:09:28.848 7895.904 - 7948.543: 2.0745% ( 34) 00:09:28.848 7948.543 - 8001.182: 2.4894% ( 47) 00:09:28.848 8001.182 - 8053.822: 2.9926% ( 57) 00:09:28.848 8053.822 - 8106.461: 3.5046% ( 58) 00:09:28.848 8106.461 - 8159.100: 3.9901% ( 55) 00:09:28.848 8159.100 - 8211.740: 4.5109% ( 59) 00:09:28.848 8211.740 - 8264.379: 5.1024% ( 67) 00:09:28.848 8264.379 - 8317.018: 5.6409% ( 61) 00:09:28.848 8317.018 - 8369.658: 6.2147% ( 65) 00:09:28.848 8369.658 - 8422.297: 6.8591% ( 73) 00:09:28.848 8422.297 - 8474.937: 7.4506% ( 67) 00:09:28.848 8474.937 - 8527.576: 8.1568% ( 80) 00:09:28.848 8527.576 - 8580.215: 8.9071% ( 85) 00:09:28.848 8580.215 - 8632.855: 9.7281% ( 93) 00:09:28.848 8632.855 - 8685.494: 10.6020% ( 99) 00:09:28.848 8685.494 - 8738.133: 11.5643% ( 109) 00:09:28.848 8738.133 - 8790.773: 12.5177% ( 108) 00:09:28.848 8790.773 - 8843.412: 13.5858% ( 121) 00:09:28.848 8843.412 - 8896.051: 14.6804% ( 124) 00:09:28.848 8896.051 - 8948.691: 15.7751% ( 124) 00:09:28.848 8948.691 - 9001.330: 16.7549% ( 111) 00:09:28.848 9001.330 - 9053.969: 17.7966% ( 118) 00:09:28.848 9053.969 - 9106.609: 18.8736% ( 122) 00:09:28.848 9106.609 - 9159.248: 20.0300% ( 131) 00:09:28.848 9159.248 - 9211.888: 21.1953% ( 132) 00:09:28.848 9211.888 - 9264.527: 22.5018% ( 148) 00:09:28.848 9264.527 - 9317.166: 23.7112% ( 137) 00:09:28.848 9317.166 - 9369.806: 24.9912% ( 145) 00:09:28.848 9369.806 - 9422.445: 26.3242% ( 151) 00:09:28.848 9422.445 - 9475.084: 27.7278% ( 159) 00:09:28.848 9475.084 - 9527.724: 29.0784% ( 153) 00:09:28.848 9527.724 - 9580.363: 30.4555% ( 156) 00:09:28.848 9580.363 - 9633.002: 31.9386% ( 168) 00:09:28.848 9633.002 - 9685.642: 33.3422% ( 159) 00:09:28.848 9685.642 - 9738.281: 34.6928% ( 153) 00:09:28.848 9738.281 - 9790.920: 35.9640% ( 144) 00:09:28.848 9790.920 - 9843.560: 37.0674% ( 125) 00:09:28.848 9843.560 - 9896.199: 38.2680% ( 136) 00:09:28.848 9896.199 - 9948.839: 39.4333% ( 132) 00:09:28.848 9948.839 - 10001.478: 40.4749% ( 118) 00:09:28.848 10001.478 - 10054.117: 41.4371% ( 109) 00:09:28.848 10054.117 - 10106.757: 42.3905% ( 108) 00:09:28.848 10106.757 - 10159.396: 43.3528% ( 109) 00:09:28.848 10159.396 - 10212.035: 44.2973% ( 107) 00:09:28.848 10212.035 - 10264.675: 45.1889% ( 101) 00:09:28.848 10264.675 - 10317.314: 46.1246% ( 106) 00:09:28.848 10317.314 - 10369.953: 46.9721% ( 96) 00:09:28.848 10369.953 - 10422.593: 48.0314% ( 120) 00:09:28.848 10422.593 - 10475.232: 49.0466% ( 115) 00:09:28.848 10475.232 - 10527.871: 50.0000% ( 108) 00:09:28.848 10527.871 - 10580.511: 50.9357% ( 106) 00:09:28.848 10580.511 - 10633.150: 51.8538% ( 104) 00:09:28.848 10633.150 - 10685.790: 52.7278% ( 99) 00:09:28.848 10685.790 - 10738.429: 53.6635% ( 106) 00:09:28.848 10738.429 - 10791.068: 54.6257% ( 109) 00:09:28.848 10791.068 - 10843.708: 55.6056% ( 111) 00:09:28.848 10843.708 - 10896.347: 56.5413% ( 106) 00:09:28.848 10896.347 - 10948.986: 57.3711% ( 94) 00:09:28.848 10948.986 - 11001.626: 58.3686% ( 113) 00:09:28.848 11001.626 - 11054.265: 59.2514% ( 100) 00:09:28.848 11054.265 - 11106.904: 60.1960% ( 107) 00:09:28.848 11106.904 - 11159.544: 61.1670% ( 110) 00:09:28.848 11159.544 - 11212.183: 62.0674% ( 102) 00:09:28.848 11212.183 - 11264.822: 62.9679% ( 102) 00:09:28.848 11264.822 - 11317.462: 63.9477% ( 111) 00:09:28.848 11317.462 - 11370.101: 64.9541% ( 114) 00:09:28.848 11370.101 - 11422.741: 65.8722% ( 104) 00:09:28.848 11422.741 - 11475.380: 66.8256% ( 108) 00:09:28.848 11475.380 - 11528.019: 67.8584% ( 117) 00:09:28.848 11528.019 
- 11580.659: 68.7059% ( 96) 00:09:28.848 11580.659 - 11633.298: 69.5533% ( 96) 00:09:28.848 11633.298 - 11685.937: 70.4449% ( 101) 00:09:28.848 11685.937 - 11738.577: 71.2129% ( 87) 00:09:28.848 11738.577 - 11791.216: 71.9898% ( 88) 00:09:28.848 11791.216 - 11843.855: 72.6607% ( 76) 00:09:28.848 11843.855 - 11896.495: 73.3845% ( 82) 00:09:28.848 11896.495 - 11949.134: 73.9760% ( 67) 00:09:28.848 11949.134 - 12001.773: 74.5939% ( 70) 00:09:28.848 12001.773 - 12054.413: 75.2295% ( 72) 00:09:28.848 12054.413 - 12107.052: 75.8828% ( 74) 00:09:28.848 12107.052 - 12159.692: 76.5007% ( 70) 00:09:28.848 12159.692 - 12212.331: 77.1098% ( 69) 00:09:28.849 12212.331 - 12264.970: 77.7013% ( 67) 00:09:28.849 12264.970 - 12317.610: 78.2927% ( 67) 00:09:28.849 12317.610 - 12370.249: 78.8577% ( 64) 00:09:28.849 12370.249 - 12422.888: 79.4227% ( 64) 00:09:28.849 12422.888 - 12475.528: 80.0053% ( 66) 00:09:28.849 12475.528 - 12528.167: 80.4820% ( 54) 00:09:28.849 12528.167 - 12580.806: 80.9057% ( 48) 00:09:28.849 12580.806 - 12633.446: 81.3471% ( 50) 00:09:28.849 12633.446 - 12686.085: 81.7002% ( 40) 00:09:28.849 12686.085 - 12738.724: 82.0533% ( 40) 00:09:28.849 12738.724 - 12791.364: 82.3976% ( 39) 00:09:28.849 12791.364 - 12844.003: 82.6889% ( 33) 00:09:28.849 12844.003 - 12896.643: 83.0067% ( 36) 00:09:28.849 12896.643 - 12949.282: 83.2627% ( 29) 00:09:28.849 12949.282 - 13001.921: 83.4834% ( 25) 00:09:28.849 13001.921 - 13054.561: 83.7041% ( 25) 00:09:28.849 13054.561 - 13107.200: 83.9601% ( 29) 00:09:28.849 13107.200 - 13159.839: 84.1984% ( 27) 00:09:28.849 13159.839 - 13212.479: 84.4015% ( 23) 00:09:28.849 13212.479 - 13265.118: 84.6663% ( 30) 00:09:28.849 13265.118 - 13317.757: 84.9047% ( 27) 00:09:28.849 13317.757 - 13370.397: 85.1254% ( 25) 00:09:28.849 13370.397 - 13423.036: 85.3549% ( 26) 00:09:28.849 13423.036 - 13475.676: 85.5314% ( 20) 00:09:28.849 13475.676 - 13580.954: 85.9640% ( 49) 00:09:28.849 13580.954 - 13686.233: 86.3524% ( 44) 00:09:28.849 13686.233 - 13791.512: 86.7761% ( 48) 00:09:28.849 13791.512 - 13896.790: 87.2087% ( 49) 00:09:28.849 13896.790 - 14002.069: 87.6236% ( 47) 00:09:28.849 14002.069 - 14107.348: 88.0561% ( 49) 00:09:28.849 14107.348 - 14212.627: 88.4446% ( 44) 00:09:28.849 14212.627 - 14317.905: 88.8242% ( 43) 00:09:28.849 14317.905 - 14423.184: 89.2302% ( 46) 00:09:28.849 14423.184 - 14528.463: 89.5569% ( 37) 00:09:28.849 14528.463 - 14633.741: 89.9188% ( 41) 00:09:28.849 14633.741 - 14739.020: 90.2807% ( 41) 00:09:28.849 14739.020 - 14844.299: 90.6427% ( 41) 00:09:28.849 14844.299 - 14949.578: 90.9428% ( 34) 00:09:28.849 14949.578 - 15054.856: 91.2959% ( 40) 00:09:28.849 15054.856 - 15160.135: 91.6049% ( 35) 00:09:28.849 15160.135 - 15265.414: 91.9050% ( 34) 00:09:28.849 15265.414 - 15370.692: 92.2405% ( 38) 00:09:28.849 15370.692 - 15475.971: 92.4788% ( 27) 00:09:28.849 15475.971 - 15581.250: 92.7172% ( 27) 00:09:28.849 15581.250 - 15686.529: 92.9643% ( 28) 00:09:28.849 15686.529 - 15791.807: 93.2115% ( 28) 00:09:28.849 15791.807 - 15897.086: 93.4145% ( 23) 00:09:28.849 15897.086 - 16002.365: 93.5823% ( 19) 00:09:28.849 16002.365 - 16107.643: 93.8206% ( 27) 00:09:28.849 16107.643 - 16212.922: 93.9707% ( 17) 00:09:28.849 16212.922 - 16318.201: 94.1119% ( 16) 00:09:28.849 16318.201 - 16423.480: 94.2532% ( 16) 00:09:28.849 16423.480 - 16528.758: 94.3326% ( 9) 00:09:28.849 16528.758 - 16634.037: 94.4386% ( 12) 00:09:28.849 16634.037 - 16739.316: 94.5975% ( 18) 00:09:28.849 16739.316 - 16844.594: 94.7828% ( 21) 00:09:28.849 16844.594 - 16949.873: 94.9859% ( 23) 
00:09:28.849 16949.873 - 17055.152: 95.1359% ( 17) 00:09:28.849 17055.152 - 17160.431: 95.2595% ( 14) 00:09:28.849 17160.431 - 17265.709: 95.4096% ( 17) 00:09:28.849 17265.709 - 17370.988: 95.5420% ( 15) 00:09:28.849 17370.988 - 17476.267: 95.6833% ( 16) 00:09:28.849 17476.267 - 17581.545: 95.8245% ( 16) 00:09:28.849 17581.545 - 17686.824: 95.9569% ( 15) 00:09:28.849 17686.824 - 17792.103: 96.0805% ( 14) 00:09:28.849 17792.103 - 17897.382: 96.1600% ( 9) 00:09:28.849 17897.382 - 18002.660: 96.2747% ( 13) 00:09:28.849 18002.660 - 18107.939: 96.3718% ( 11) 00:09:28.849 18107.939 - 18213.218: 96.4424% ( 8) 00:09:28.849 18213.218 - 18318.496: 96.5042% ( 7) 00:09:28.849 18318.496 - 18423.775: 96.5660% ( 7) 00:09:28.849 18423.775 - 18529.054: 96.6102% ( 5) 00:09:28.849 18739.611 - 18844.890: 96.6190% ( 1) 00:09:28.849 18844.890 - 18950.169: 96.6455% ( 3) 00:09:28.849 18950.169 - 19055.447: 96.6984% ( 6) 00:09:28.849 19055.447 - 19160.726: 96.7779% ( 9) 00:09:28.849 19160.726 - 19266.005: 96.8397% ( 7) 00:09:28.849 19266.005 - 19371.284: 96.9103% ( 8) 00:09:28.849 19371.284 - 19476.562: 96.9721% ( 7) 00:09:28.849 19476.562 - 19581.841: 97.0427% ( 8) 00:09:28.849 19581.841 - 19687.120: 97.0957% ( 6) 00:09:28.849 19687.120 - 19792.398: 97.1663% ( 8) 00:09:28.849 19792.398 - 19897.677: 97.2369% ( 8) 00:09:28.849 19897.677 - 20002.956: 97.3340% ( 11) 00:09:28.849 20002.956 - 20108.235: 97.4400% ( 12) 00:09:28.849 20108.235 - 20213.513: 97.5194% ( 9) 00:09:28.849 20213.513 - 20318.792: 97.6165% ( 11) 00:09:28.849 20318.792 - 20424.071: 97.7048% ( 10) 00:09:28.849 20424.071 - 20529.349: 97.8019% ( 11) 00:09:28.849 20529.349 - 20634.628: 97.9078% ( 12) 00:09:28.849 20634.628 - 20739.907: 98.0049% ( 11) 00:09:28.849 20739.907 - 20845.186: 98.0932% ( 10) 00:09:28.849 20845.186 - 20950.464: 98.1903% ( 11) 00:09:28.849 20950.464 - 21055.743: 98.2609% ( 8) 00:09:28.849 21055.743 - 21161.022: 98.3316% ( 8) 00:09:28.849 21161.022 - 21266.300: 98.3845% ( 6) 00:09:28.849 21266.300 - 21371.579: 98.4640% ( 9) 00:09:28.849 21371.579 - 21476.858: 98.5346% ( 8) 00:09:28.849 21476.858 - 21582.137: 98.5876% ( 6) 00:09:28.849 21582.137 - 21687.415: 98.6582% ( 8) 00:09:28.849 21687.415 - 21792.694: 98.7288% ( 8) 00:09:28.849 21792.694 - 21897.973: 98.7906% ( 7) 00:09:28.849 21897.973 - 22003.251: 98.8347% ( 5) 00:09:28.849 22003.251 - 22108.530: 98.8701% ( 4) 00:09:28.849 31162.500 - 31373.057: 98.8789% ( 1) 00:09:28.849 31373.057 - 31583.614: 98.9583% ( 9) 00:09:28.849 31583.614 - 31794.172: 99.0201% ( 7) 00:09:28.849 31794.172 - 32004.729: 99.0907% ( 8) 00:09:28.849 32004.729 - 32215.287: 99.1702% ( 9) 00:09:28.849 32215.287 - 32425.844: 99.2496% ( 9) 00:09:28.849 32425.844 - 32636.402: 99.3203% ( 8) 00:09:28.849 32636.402 - 32846.959: 99.4085% ( 10) 00:09:28.849 32846.959 - 33057.516: 99.4350% ( 3) 00:09:28.849 38953.124 - 39163.682: 99.4880% ( 6) 00:09:28.849 39163.682 - 39374.239: 99.5763% ( 10) 00:09:28.849 39374.239 - 39584.797: 99.6381% ( 7) 00:09:28.849 39584.797 - 39795.354: 99.7175% ( 9) 00:09:28.849 39795.354 - 40005.912: 99.7881% ( 8) 00:09:28.849 40005.912 - 40216.469: 99.8764% ( 10) 00:09:28.849 40216.469 - 40427.027: 99.9647% ( 10) 00:09:28.849 40427.027 - 40637.584: 100.0000% ( 4) 00:09:28.849 00:09:28.849 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:28.849 ============================================================================== 00:09:28.849 Range in us Cumulative IO count 00:09:28.849 6343.043 - 6369.362: 0.0177% ( 2) 00:09:28.849 6369.362 - 6395.682: 0.0265% ( 1) 
00:09:28.849 6395.682 - 6422.002: 0.0441% ( 2) 00:09:28.849 6422.002 - 6448.321: 0.0618% ( 2) 00:09:28.849 6448.321 - 6474.641: 0.0706% ( 1) 00:09:28.849 6474.641 - 6500.961: 0.0883% ( 2) 00:09:28.849 6500.961 - 6527.280: 0.1059% ( 2) 00:09:28.849 6527.280 - 6553.600: 0.1324% ( 3) 00:09:28.849 6553.600 - 6579.920: 0.1412% ( 1) 00:09:28.849 6579.920 - 6606.239: 0.1589% ( 2) 00:09:28.849 6606.239 - 6632.559: 0.1766% ( 2) 00:09:28.849 6632.559 - 6658.879: 0.1942% ( 2) 00:09:28.849 6658.879 - 6685.198: 0.2119% ( 2) 00:09:28.849 6685.198 - 6711.518: 0.2207% ( 1) 00:09:28.849 6711.518 - 6737.838: 0.2383% ( 2) 00:09:28.849 6737.838 - 6790.477: 0.2737% ( 4) 00:09:28.850 6790.477 - 6843.116: 0.3001% ( 3) 00:09:28.850 6843.116 - 6895.756: 0.3355% ( 4) 00:09:28.850 6895.756 - 6948.395: 0.3619% ( 3) 00:09:28.850 6948.395 - 7001.035: 0.3972% ( 4) 00:09:28.850 7001.035 - 7053.674: 0.4326% ( 4) 00:09:28.850 7053.674 - 7106.313: 0.4590% ( 3) 00:09:28.850 7106.313 - 7158.953: 0.4944% ( 4) 00:09:28.850 7158.953 - 7211.592: 0.5297% ( 4) 00:09:28.850 7211.592 - 7264.231: 0.5650% ( 4) 00:09:28.850 7580.067 - 7632.707: 0.6091% ( 5) 00:09:28.850 7632.707 - 7685.346: 0.6532% ( 5) 00:09:28.850 7685.346 - 7737.986: 0.7857% ( 15) 00:09:28.850 7737.986 - 7790.625: 0.9446% ( 18) 00:09:28.850 7790.625 - 7843.264: 1.1299% ( 21) 00:09:28.850 7843.264 - 7895.904: 1.3948% ( 30) 00:09:28.850 7895.904 - 7948.543: 1.7479% ( 40) 00:09:28.850 7948.543 - 8001.182: 2.1540% ( 46) 00:09:28.850 8001.182 - 8053.822: 2.6395% ( 55) 00:09:28.850 8053.822 - 8106.461: 3.1515% ( 58) 00:09:28.850 8106.461 - 8159.100: 3.6811% ( 60) 00:09:28.850 8159.100 - 8211.740: 4.2991% ( 70) 00:09:28.850 8211.740 - 8264.379: 4.9700% ( 76) 00:09:28.850 8264.379 - 8317.018: 5.6409% ( 76) 00:09:28.850 8317.018 - 8369.658: 6.3206% ( 77) 00:09:28.850 8369.658 - 8422.297: 7.0268% ( 80) 00:09:28.850 8422.297 - 8474.937: 7.7066% ( 77) 00:09:28.850 8474.937 - 8527.576: 8.4481% ( 84) 00:09:28.850 8527.576 - 8580.215: 9.1455% ( 79) 00:09:28.850 8580.215 - 8632.855: 9.9400% ( 90) 00:09:28.850 8632.855 - 8685.494: 10.8492% ( 103) 00:09:28.850 8685.494 - 8738.133: 11.7143% ( 98) 00:09:28.850 8738.133 - 8790.773: 12.5971% ( 100) 00:09:28.850 8790.773 - 8843.412: 13.5946% ( 113) 00:09:28.850 8843.412 - 8896.051: 14.5657% ( 110) 00:09:28.850 8896.051 - 8948.691: 15.6603% ( 124) 00:09:28.850 8948.691 - 9001.330: 16.7373% ( 122) 00:09:28.850 9001.330 - 9053.969: 17.8496% ( 126) 00:09:28.850 9053.969 - 9106.609: 18.9442% ( 124) 00:09:28.850 9106.609 - 9159.248: 20.0565% ( 126) 00:09:28.850 9159.248 - 9211.888: 21.2659% ( 137) 00:09:28.850 9211.888 - 9264.527: 22.5459% ( 145) 00:09:28.850 9264.527 - 9317.166: 23.9054% ( 154) 00:09:28.850 9317.166 - 9369.806: 25.2825% ( 156) 00:09:28.850 9369.806 - 9422.445: 26.7832% ( 170) 00:09:28.850 9422.445 - 9475.084: 28.3369% ( 176) 00:09:28.850 9475.084 - 9527.724: 29.9082% ( 178) 00:09:28.850 9527.724 - 9580.363: 31.5060% ( 181) 00:09:28.850 9580.363 - 9633.002: 32.9714% ( 166) 00:09:28.850 9633.002 - 9685.642: 34.2867% ( 149) 00:09:28.850 9685.642 - 9738.281: 35.6109% ( 150) 00:09:28.850 9738.281 - 9790.920: 36.9703% ( 154) 00:09:28.850 9790.920 - 9843.560: 38.1179% ( 130) 00:09:28.850 9843.560 - 9896.199: 39.3273% ( 137) 00:09:28.850 9896.199 - 9948.839: 40.4220% ( 124) 00:09:28.850 9948.839 - 10001.478: 41.4725% ( 119) 00:09:28.850 10001.478 - 10054.117: 42.4700% ( 113) 00:09:28.850 10054.117 - 10106.757: 43.4675% ( 113) 00:09:28.850 10106.757 - 10159.396: 44.3768% ( 103) 00:09:28.850 10159.396 - 10212.035: 45.2684% ( 101) 
00:09:28.850 10212.035 - 10264.675: 46.1335% ( 98) 00:09:28.850 10264.675 - 10317.314: 46.9544% ( 93) 00:09:28.850 10317.314 - 10369.953: 47.6960% ( 84) 00:09:28.850 10369.953 - 10422.593: 48.4640% ( 87) 00:09:28.850 10422.593 - 10475.232: 49.2496% ( 89) 00:09:28.850 10475.232 - 10527.871: 50.1324% ( 100) 00:09:28.850 10527.871 - 10580.511: 51.0152% ( 100) 00:09:28.850 10580.511 - 10633.150: 51.9862% ( 110) 00:09:28.850 10633.150 - 10685.790: 53.0191% ( 117) 00:09:28.850 10685.790 - 10738.429: 53.9195% ( 102) 00:09:28.850 10738.429 - 10791.068: 54.8641% ( 107) 00:09:28.850 10791.068 - 10843.708: 55.8174% ( 108) 00:09:28.850 10843.708 - 10896.347: 56.7620% ( 107) 00:09:28.850 10896.347 - 10948.986: 57.7419% ( 111) 00:09:28.850 10948.986 - 11001.626: 58.7394% ( 113) 00:09:28.850 11001.626 - 11054.265: 59.7193% ( 111) 00:09:28.850 11054.265 - 11106.904: 60.7256% ( 114) 00:09:28.850 11106.904 - 11159.544: 61.6967% ( 110) 00:09:28.850 11159.544 - 11212.183: 62.7295% ( 117) 00:09:28.850 11212.183 - 11264.822: 63.7182% ( 112) 00:09:28.850 11264.822 - 11317.462: 64.6628% ( 107) 00:09:28.850 11317.462 - 11370.101: 65.6780% ( 115) 00:09:28.850 11370.101 - 11422.741: 66.6225% ( 107) 00:09:28.850 11422.741 - 11475.380: 67.5936% ( 110) 00:09:28.850 11475.380 - 11528.019: 68.4763% ( 100) 00:09:28.850 11528.019 - 11580.659: 69.2708% ( 90) 00:09:28.850 11580.659 - 11633.298: 70.1359% ( 98) 00:09:28.850 11633.298 - 11685.937: 70.9834% ( 96) 00:09:28.850 11685.937 - 11738.577: 71.7867% ( 91) 00:09:28.850 11738.577 - 11791.216: 72.6783% ( 101) 00:09:28.850 11791.216 - 11843.855: 73.4287% ( 85) 00:09:28.850 11843.855 - 11896.495: 74.1172% ( 78) 00:09:28.850 11896.495 - 11949.134: 74.7617% ( 73) 00:09:28.850 11949.134 - 12001.773: 75.4061% ( 73) 00:09:28.850 12001.773 - 12054.413: 75.9710% ( 64) 00:09:28.850 12054.413 - 12107.052: 76.5625% ( 67) 00:09:28.850 12107.052 - 12159.692: 77.1098% ( 62) 00:09:28.850 12159.692 - 12212.331: 77.7278% ( 70) 00:09:28.850 12212.331 - 12264.970: 78.2839% ( 63) 00:09:28.850 12264.970 - 12317.610: 78.8400% ( 63) 00:09:28.850 12317.610 - 12370.249: 79.3520% ( 58) 00:09:28.850 12370.249 - 12422.888: 79.8199% ( 53) 00:09:28.850 12422.888 - 12475.528: 80.2966% ( 54) 00:09:28.850 12475.528 - 12528.167: 80.7115% ( 47) 00:09:28.850 12528.167 - 12580.806: 81.1794% ( 53) 00:09:28.850 12580.806 - 12633.446: 81.6208% ( 50) 00:09:28.850 12633.446 - 12686.085: 82.0268% ( 46) 00:09:28.850 12686.085 - 12738.724: 82.3976% ( 42) 00:09:28.850 12738.724 - 12791.364: 82.7419% ( 39) 00:09:28.850 12791.364 - 12844.003: 83.0862% ( 39) 00:09:28.850 12844.003 - 12896.643: 83.3951% ( 35) 00:09:28.850 12896.643 - 12949.282: 83.6953% ( 34) 00:09:28.850 12949.282 - 13001.921: 83.9689% ( 31) 00:09:28.850 13001.921 - 13054.561: 84.2867% ( 36) 00:09:28.850 13054.561 - 13107.200: 84.5604% ( 31) 00:09:28.850 13107.200 - 13159.839: 84.7899% ( 26) 00:09:28.850 13159.839 - 13212.479: 85.0106% ( 25) 00:09:28.850 13212.479 - 13265.118: 85.1871% ( 20) 00:09:28.850 13265.118 - 13317.757: 85.3460% ( 18) 00:09:28.850 13317.757 - 13370.397: 85.5226% ( 20) 00:09:28.850 13370.397 - 13423.036: 85.7345% ( 24) 00:09:28.850 13423.036 - 13475.676: 85.9375% ( 23) 00:09:28.850 13475.676 - 13580.954: 86.3612% ( 48) 00:09:28.850 13580.954 - 13686.233: 86.7320% ( 42) 00:09:28.850 13686.233 - 13791.512: 87.0763% ( 39) 00:09:28.850 13791.512 - 13896.790: 87.4117% ( 38) 00:09:28.850 13896.790 - 14002.069: 87.7295% ( 36) 00:09:28.850 14002.069 - 14107.348: 88.0120% ( 32) 00:09:28.850 14107.348 - 14212.627: 88.3121% ( 34) 00:09:28.850 
14212.627 - 14317.905: 88.6917% ( 43) 00:09:28.850 14317.905 - 14423.184: 89.0184% ( 37) 00:09:28.850 14423.184 - 14528.463: 89.3450% ( 37) 00:09:28.850 14528.463 - 14633.741: 89.6363% ( 33) 00:09:28.850 14633.741 - 14739.020: 89.9188% ( 32) 00:09:28.850 14739.020 - 14844.299: 90.2895% ( 42) 00:09:28.850 14844.299 - 14949.578: 90.6691% ( 43) 00:09:28.850 14949.578 - 15054.856: 90.9781% ( 35) 00:09:28.850 15054.856 - 15160.135: 91.2782% ( 34) 00:09:28.850 15160.135 - 15265.414: 91.6225% ( 39) 00:09:28.850 15265.414 - 15370.692: 91.9845% ( 41) 00:09:28.850 15370.692 - 15475.971: 92.3464% ( 41) 00:09:28.850 15475.971 - 15581.250: 92.6289% ( 32) 00:09:28.850 15581.250 - 15686.529: 92.9555% ( 37) 00:09:28.850 15686.529 - 15791.807: 93.2292% ( 31) 00:09:28.850 15791.807 - 15897.086: 93.5117% ( 32) 00:09:28.850 15897.086 - 16002.365: 93.8118% ( 34) 00:09:28.850 16002.365 - 16107.643: 94.0590% ( 28) 00:09:28.850 16107.643 - 16212.922: 94.2532% ( 22) 00:09:28.850 16212.922 - 16318.201: 94.4297% ( 20) 00:09:28.850 16318.201 - 16423.480: 94.5445% ( 13) 00:09:28.850 16423.480 - 16528.758: 94.6504% ( 12) 00:09:28.851 16528.758 - 16634.037: 94.7210% ( 8) 00:09:28.851 16634.037 - 16739.316: 94.8093% ( 10) 00:09:28.851 16739.316 - 16844.594: 94.8446% ( 4) 00:09:28.851 16844.594 - 16949.873: 94.8799% ( 4) 00:09:28.851 16949.873 - 17055.152: 94.9153% ( 4) 00:09:28.851 17265.709 - 17370.988: 94.9241% ( 1) 00:09:28.851 17370.988 - 17476.267: 94.9682% ( 5) 00:09:28.851 17476.267 - 17581.545: 95.0477% ( 9) 00:09:28.851 17581.545 - 17686.824: 95.1271% ( 9) 00:09:28.851 17686.824 - 17792.103: 95.2154% ( 10) 00:09:28.851 17792.103 - 17897.382: 95.3125% ( 11) 00:09:28.851 17897.382 - 18002.660: 95.4537% ( 16) 00:09:28.851 18002.660 - 18107.939: 95.5950% ( 16) 00:09:28.851 18107.939 - 18213.218: 95.7451% ( 17) 00:09:28.851 18213.218 - 18318.496: 95.8863% ( 16) 00:09:28.851 18318.496 - 18423.775: 96.0187% ( 15) 00:09:28.851 18423.775 - 18529.054: 96.1776% ( 18) 00:09:28.851 18529.054 - 18634.333: 96.3100% ( 15) 00:09:28.851 18634.333 - 18739.611: 96.3983% ( 10) 00:09:28.851 18739.611 - 18844.890: 96.4866% ( 10) 00:09:28.851 18844.890 - 18950.169: 96.5749% ( 10) 00:09:28.851 18950.169 - 19055.447: 96.6102% ( 4) 00:09:28.851 19266.005 - 19371.284: 96.6190% ( 1) 00:09:28.851 19371.284 - 19476.562: 96.6720% ( 6) 00:09:28.851 19476.562 - 19581.841: 96.7161% ( 5) 00:09:28.851 19581.841 - 19687.120: 96.7514% ( 4) 00:09:28.851 19687.120 - 19792.398: 96.8573% ( 12) 00:09:28.851 19792.398 - 19897.677: 96.9633% ( 12) 00:09:28.851 19897.677 - 20002.956: 97.0692% ( 12) 00:09:28.851 20002.956 - 20108.235: 97.1840% ( 13) 00:09:28.851 20108.235 - 20213.513: 97.2899% ( 12) 00:09:28.851 20213.513 - 20318.792: 97.4047% ( 13) 00:09:28.851 20318.792 - 20424.071: 97.5194% ( 13) 00:09:28.851 20424.071 - 20529.349: 97.6342% ( 13) 00:09:28.851 20529.349 - 20634.628: 97.7401% ( 12) 00:09:28.851 20634.628 - 20739.907: 97.8637% ( 14) 00:09:28.851 20739.907 - 20845.186: 97.9431% ( 9) 00:09:28.851 20845.186 - 20950.464: 98.0138% ( 8) 00:09:28.851 20950.464 - 21055.743: 98.1197% ( 12) 00:09:28.851 21055.743 - 21161.022: 98.2345% ( 13) 00:09:28.851 21161.022 - 21266.300: 98.3581% ( 14) 00:09:28.851 21266.300 - 21371.579: 98.4640% ( 12) 00:09:28.851 21371.579 - 21476.858: 98.5434% ( 9) 00:09:28.851 21476.858 - 21582.137: 98.5876% ( 5) 00:09:28.851 21582.137 - 21687.415: 98.6229% ( 4) 00:09:28.851 21687.415 - 21792.694: 98.6670% ( 5) 00:09:28.851 21792.694 - 21897.973: 98.7112% ( 5) 00:09:28.851 21897.973 - 22003.251: 98.7553% ( 5) 00:09:28.851 
22003.251 - 22108.530: 98.7994% ( 5) 00:09:28.851 22108.530 - 22213.809: 98.8524% ( 6) 00:09:28.851 22213.809 - 22319.088: 98.8701% ( 2) 00:09:28.851 31794.172 - 32004.729: 98.9054% ( 4) 00:09:28.851 32004.729 - 32215.287: 98.9936% ( 10) 00:09:28.851 32215.287 - 32425.844: 99.0819% ( 10) 00:09:28.851 32425.844 - 32636.402: 99.1790% ( 11) 00:09:28.851 32636.402 - 32846.959: 99.2585% ( 9) 00:09:28.851 32846.959 - 33057.516: 99.3379% ( 9) 00:09:28.851 33057.516 - 33268.074: 99.4174% ( 9) 00:09:28.851 33268.074 - 33478.631: 99.4350% ( 2) 00:09:28.851 38110.895 - 38321.452: 99.4615% ( 3) 00:09:28.851 38321.452 - 38532.010: 99.5498% ( 10) 00:09:28.851 38532.010 - 38742.567: 99.6381% ( 10) 00:09:28.851 38742.567 - 38953.124: 99.7175% ( 9) 00:09:28.851 38953.124 - 39163.682: 99.7881% ( 8) 00:09:28.851 39163.682 - 39374.239: 99.8676% ( 9) 00:09:28.851 39374.239 - 39584.797: 99.9559% ( 10) 00:09:28.851 39584.797 - 39795.354: 100.0000% ( 5) 00:09:28.851 00:09:28.851 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:28.851 ============================================================================== 00:09:28.851 Range in us Cumulative IO count 00:09:28.851 5632.411 - 5658.731: 0.0088% ( 1) 00:09:28.851 5658.731 - 5685.051: 0.0177% ( 1) 00:09:28.851 5685.051 - 5711.370: 0.0265% ( 1) 00:09:28.851 5711.370 - 5737.690: 0.0441% ( 2) 00:09:28.851 5737.690 - 5764.010: 0.0618% ( 2) 00:09:28.851 5764.010 - 5790.329: 0.0706% ( 1) 00:09:28.851 5790.329 - 5816.649: 0.0883% ( 2) 00:09:28.851 5816.649 - 5842.969: 0.1059% ( 2) 00:09:28.851 5842.969 - 5869.288: 0.1236% ( 2) 00:09:28.851 5869.288 - 5895.608: 0.1501% ( 3) 00:09:28.851 5895.608 - 5921.928: 0.1589% ( 1) 00:09:28.851 5921.928 - 5948.247: 0.1766% ( 2) 00:09:28.851 5948.247 - 5974.567: 0.2030% ( 3) 00:09:28.851 5974.567 - 6000.887: 0.2207% ( 2) 00:09:28.851 6000.887 - 6027.206: 0.2383% ( 2) 00:09:28.851 6027.206 - 6053.526: 0.2560% ( 2) 00:09:28.851 6053.526 - 6079.846: 0.2737% ( 2) 00:09:28.851 6079.846 - 6106.165: 0.2825% ( 1) 00:09:28.851 6106.165 - 6132.485: 0.2913% ( 1) 00:09:28.851 6132.485 - 6158.805: 0.3090% ( 2) 00:09:28.851 6158.805 - 6185.124: 0.3266% ( 2) 00:09:28.851 6185.124 - 6211.444: 0.3355% ( 1) 00:09:28.851 6211.444 - 6237.764: 0.3443% ( 1) 00:09:28.851 6237.764 - 6264.084: 0.3619% ( 2) 00:09:28.851 6264.084 - 6290.403: 0.3796% ( 2) 00:09:28.851 6290.403 - 6316.723: 0.3884% ( 1) 00:09:28.851 6316.723 - 6343.043: 0.4061% ( 2) 00:09:28.851 6343.043 - 6369.362: 0.4237% ( 2) 00:09:28.851 6369.362 - 6395.682: 0.4414% ( 2) 00:09:28.851 6395.682 - 6422.002: 0.4590% ( 2) 00:09:28.851 6422.002 - 6448.321: 0.4679% ( 1) 00:09:28.851 6448.321 - 6474.641: 0.4855% ( 2) 00:09:28.851 6474.641 - 6500.961: 0.4944% ( 1) 00:09:28.851 6500.961 - 6527.280: 0.5120% ( 2) 00:09:28.851 6527.280 - 6553.600: 0.5297% ( 2) 00:09:28.851 6553.600 - 6579.920: 0.5473% ( 2) 00:09:28.851 6579.920 - 6606.239: 0.5561% ( 1) 00:09:28.851 6606.239 - 6632.559: 0.5650% ( 1) 00:09:28.851 7316.871 - 7369.510: 0.5738% ( 1) 00:09:28.851 7369.510 - 7422.149: 0.6003% ( 3) 00:09:28.851 7422.149 - 7474.789: 0.6356% ( 4) 00:09:28.851 7474.789 - 7527.428: 0.6709% ( 4) 00:09:28.851 7527.428 - 7580.067: 0.7239% ( 6) 00:09:28.851 7580.067 - 7632.707: 0.7945% ( 8) 00:09:28.851 7632.707 - 7685.346: 0.9004% ( 12) 00:09:28.851 7685.346 - 7737.986: 1.0328% ( 15) 00:09:28.851 7737.986 - 7790.625: 1.2624% ( 26) 00:09:28.851 7790.625 - 7843.264: 1.4742% ( 24) 00:09:28.851 7843.264 - 7895.904: 1.7655% ( 33) 00:09:28.851 7895.904 - 7948.543: 2.1010% ( 38) 00:09:28.851 7948.543 - 
8001.182: 2.5247% ( 48) 00:09:28.851 8001.182 - 8053.822: 3.0367% ( 58) 00:09:28.851 8053.822 - 8106.461: 3.5311% ( 56) 00:09:28.851 8106.461 - 8159.100: 4.0872% ( 63) 00:09:28.851 8159.100 - 8211.740: 4.6610% ( 65) 00:09:28.851 8211.740 - 8264.379: 5.1907% ( 60) 00:09:28.851 8264.379 - 8317.018: 5.8351% ( 73) 00:09:28.851 8317.018 - 8369.658: 6.4972% ( 75) 00:09:28.851 8369.658 - 8422.297: 7.1593% ( 75) 00:09:28.851 8422.297 - 8474.937: 7.8919% ( 83) 00:09:28.851 8474.937 - 8527.576: 8.5717% ( 77) 00:09:28.851 8527.576 - 8580.215: 9.3044% ( 83) 00:09:28.851 8580.215 - 8632.855: 10.0812% ( 88) 00:09:28.851 8632.855 - 8685.494: 10.7963% ( 81) 00:09:28.851 8685.494 - 8738.133: 11.5378% ( 84) 00:09:28.851 8738.133 - 8790.773: 12.4294% ( 101) 00:09:28.851 8790.773 - 8843.412: 13.4181% ( 112) 00:09:28.851 8843.412 - 8896.051: 14.4421% ( 116) 00:09:28.851 8896.051 - 8948.691: 15.6073% ( 132) 00:09:28.851 8948.691 - 9001.330: 16.6667% ( 120) 00:09:28.851 9001.330 - 9053.969: 17.6907% ( 116) 00:09:28.851 9053.969 - 9106.609: 18.7677% ( 122) 00:09:28.851 9106.609 - 9159.248: 19.8535% ( 123) 00:09:28.851 9159.248 - 9211.888: 20.9746% ( 127) 00:09:28.852 9211.888 - 9264.527: 22.1487% ( 133) 00:09:28.852 9264.527 - 9317.166: 23.4905% ( 152) 00:09:28.852 9317.166 - 9369.806: 25.0000% ( 171) 00:09:28.852 9369.806 - 9422.445: 26.5095% ( 171) 00:09:28.852 9422.445 - 9475.084: 28.0279% ( 172) 00:09:28.852 9475.084 - 9527.724: 29.5198% ( 169) 00:09:28.852 9527.724 - 9580.363: 30.9057% ( 157) 00:09:28.852 9580.363 - 9633.002: 32.2210% ( 149) 00:09:28.852 9633.002 - 9685.642: 33.5629% ( 152) 00:09:28.852 9685.642 - 9738.281: 34.8958% ( 151) 00:09:28.852 9738.281 - 9790.920: 36.2818% ( 157) 00:09:28.852 9790.920 - 9843.560: 37.4735% ( 135) 00:09:28.852 9843.560 - 9896.199: 38.5858% ( 126) 00:09:28.852 9896.199 - 9948.839: 39.6451% ( 120) 00:09:28.852 9948.839 - 10001.478: 40.7574% ( 126) 00:09:28.852 10001.478 - 10054.117: 41.8432% ( 123) 00:09:28.852 10054.117 - 10106.757: 42.9202% ( 122) 00:09:28.852 10106.757 - 10159.396: 43.9883% ( 121) 00:09:28.852 10159.396 - 10212.035: 45.0212% ( 117) 00:09:28.852 10212.035 - 10264.675: 46.0982% ( 122) 00:09:28.852 10264.675 - 10317.314: 47.0604% ( 109) 00:09:28.852 10317.314 - 10369.953: 47.8990% ( 95) 00:09:28.852 10369.953 - 10422.593: 48.6935% ( 90) 00:09:28.852 10422.593 - 10475.232: 49.5056% ( 92) 00:09:28.852 10475.232 - 10527.871: 50.2383% ( 83) 00:09:28.852 10527.871 - 10580.511: 51.1388% ( 102) 00:09:28.852 10580.511 - 10633.150: 52.0569% ( 104) 00:09:28.852 10633.150 - 10685.790: 52.9043% ( 96) 00:09:28.852 10685.790 - 10738.429: 53.8577% ( 108) 00:09:28.852 10738.429 - 10791.068: 54.7581% ( 102) 00:09:28.852 10791.068 - 10843.708: 55.6409% ( 100) 00:09:28.852 10843.708 - 10896.347: 56.5413% ( 102) 00:09:28.852 10896.347 - 10948.986: 57.3888% ( 96) 00:09:28.852 10948.986 - 11001.626: 58.2892% ( 102) 00:09:28.852 11001.626 - 11054.265: 59.2426% ( 108) 00:09:28.852 11054.265 - 11106.904: 60.1077% ( 98) 00:09:28.852 11106.904 - 11159.544: 61.0081% ( 102) 00:09:28.852 11159.544 - 11212.183: 61.9527% ( 107) 00:09:28.852 11212.183 - 11264.822: 62.8884% ( 106) 00:09:28.852 11264.822 - 11317.462: 63.7888% ( 102) 00:09:28.852 11317.462 - 11370.101: 64.7334% ( 107) 00:09:28.852 11370.101 - 11422.741: 65.6868% ( 108) 00:09:28.852 11422.741 - 11475.380: 66.6314% ( 107) 00:09:28.852 11475.380 - 11528.019: 67.6642% ( 117) 00:09:28.852 11528.019 - 11580.659: 68.6529% ( 112) 00:09:28.852 11580.659 - 11633.298: 69.5180% ( 98) 00:09:28.852 11633.298 - 11685.937: 70.3566% ( 95) 
00:09:28.852 11685.937 - 11738.577: 71.1600% ( 91) 00:09:28.852 11738.577 - 11791.216: 71.9103% ( 85) 00:09:28.852 11791.216 - 11843.855: 72.6783% ( 87) 00:09:28.852 11843.855 - 11896.495: 73.3581% ( 77) 00:09:28.852 11896.495 - 11949.134: 74.0819% ( 82) 00:09:28.852 11949.134 - 12001.773: 74.7087% ( 71) 00:09:28.852 12001.773 - 12054.413: 75.3355% ( 71) 00:09:28.852 12054.413 - 12107.052: 75.9181% ( 66) 00:09:28.852 12107.052 - 12159.692: 76.5360% ( 70) 00:09:28.852 12159.692 - 12212.331: 77.1275% ( 67) 00:09:28.852 12212.331 - 12264.970: 77.6483% ( 59) 00:09:28.852 12264.970 - 12317.610: 78.1603% ( 58) 00:09:28.852 12317.610 - 12370.249: 78.7429% ( 66) 00:09:28.852 12370.249 - 12422.888: 79.2373% ( 56) 00:09:28.852 12422.888 - 12475.528: 79.6787% ( 50) 00:09:28.852 12475.528 - 12528.167: 80.1377% ( 52) 00:09:28.852 12528.167 - 12580.806: 80.5173% ( 43) 00:09:28.852 12580.806 - 12633.446: 80.9145% ( 45) 00:09:28.852 12633.446 - 12686.085: 81.3294% ( 47) 00:09:28.852 12686.085 - 12738.724: 81.6826% ( 40) 00:09:28.852 12738.724 - 12791.364: 82.0357% ( 40) 00:09:28.852 12791.364 - 12844.003: 82.3270% ( 33) 00:09:28.852 12844.003 - 12896.643: 82.6359% ( 35) 00:09:28.852 12896.643 - 12949.282: 82.9273% ( 33) 00:09:28.852 12949.282 - 13001.921: 83.2186% ( 33) 00:09:28.852 13001.921 - 13054.561: 83.5099% ( 33) 00:09:28.852 13054.561 - 13107.200: 83.7482% ( 27) 00:09:28.852 13107.200 - 13159.839: 84.0219% ( 31) 00:09:28.852 13159.839 - 13212.479: 84.3044% ( 32) 00:09:28.852 13212.479 - 13265.118: 84.5692% ( 30) 00:09:28.852 13265.118 - 13317.757: 84.8164% ( 28) 00:09:28.852 13317.757 - 13370.397: 85.0547% ( 27) 00:09:28.852 13370.397 - 13423.036: 85.3196% ( 30) 00:09:28.852 13423.036 - 13475.676: 85.5579% ( 27) 00:09:28.852 13475.676 - 13580.954: 86.0787% ( 59) 00:09:28.852 13580.954 - 13686.233: 86.5731% ( 56) 00:09:28.852 13686.233 - 13791.512: 87.0586% ( 55) 00:09:28.852 13791.512 - 13896.790: 87.5883% ( 60) 00:09:28.852 13896.790 - 14002.069: 88.0297% ( 50) 00:09:28.852 14002.069 - 14107.348: 88.4093% ( 43) 00:09:28.852 14107.348 - 14212.627: 88.8065% ( 45) 00:09:28.852 14212.627 - 14317.905: 89.2126% ( 46) 00:09:28.852 14317.905 - 14423.184: 89.5215% ( 35) 00:09:28.852 14423.184 - 14528.463: 89.8570% ( 38) 00:09:28.852 14528.463 - 14633.741: 90.2101% ( 40) 00:09:28.852 14633.741 - 14739.020: 90.5367% ( 37) 00:09:28.852 14739.020 - 14844.299: 90.8810% ( 39) 00:09:28.852 14844.299 - 14949.578: 91.2341% ( 40) 00:09:28.852 14949.578 - 15054.856: 91.6049% ( 42) 00:09:28.852 15054.856 - 15160.135: 91.9227% ( 36) 00:09:28.852 15160.135 - 15265.414: 92.2228% ( 34) 00:09:28.852 15265.414 - 15370.692: 92.4965% ( 31) 00:09:28.852 15370.692 - 15475.971: 92.7348% ( 27) 00:09:28.852 15475.971 - 15581.250: 92.9555% ( 25) 00:09:28.852 15581.250 - 15686.529: 93.1674% ( 24) 00:09:28.852 15686.529 - 15791.807: 93.3881% ( 25) 00:09:28.852 15791.807 - 15897.086: 93.5999% ( 24) 00:09:28.852 15897.086 - 16002.365: 93.7941% ( 22) 00:09:28.852 16002.365 - 16107.643: 93.9795% ( 21) 00:09:28.852 16107.643 - 16212.922: 94.1208% ( 16) 00:09:28.852 16212.922 - 16318.201: 94.2179% ( 11) 00:09:28.852 16318.201 - 16423.480: 94.3503% ( 15) 00:09:28.852 16423.480 - 16528.758: 94.4915% ( 16) 00:09:28.852 16528.758 - 16634.037: 94.6151% ( 14) 00:09:28.852 16634.037 - 16739.316: 94.7475% ( 15) 00:09:28.852 16739.316 - 16844.594: 94.8623% ( 13) 00:09:28.852 16844.594 - 16949.873: 94.9594% ( 11) 00:09:28.852 16949.873 - 17055.152: 95.0300% ( 8) 00:09:28.852 17055.152 - 17160.431: 95.1271% ( 11) 00:09:28.852 17160.431 - 17265.709: 
95.2331% ( 12) 00:09:28.852 17265.709 - 17370.988: 95.3125% ( 9) 00:09:28.852 17370.988 - 17476.267: 95.4096% ( 11) 00:09:28.852 17476.267 - 17581.545: 95.5067% ( 11) 00:09:28.852 17581.545 - 17686.824: 95.6038% ( 11) 00:09:28.852 17686.824 - 17792.103: 95.7362% ( 15) 00:09:28.852 17792.103 - 17897.382: 95.8333% ( 11) 00:09:28.852 17897.382 - 18002.660: 95.9304% ( 11) 00:09:28.852 18002.660 - 18107.939: 96.0364% ( 12) 00:09:28.852 18107.939 - 18213.218: 96.1335% ( 11) 00:09:28.852 18213.218 - 18318.496: 96.2129% ( 9) 00:09:28.852 18318.496 - 18423.775: 96.3100% ( 11) 00:09:28.852 18423.775 - 18529.054: 96.4071% ( 11) 00:09:28.852 18529.054 - 18634.333: 96.5131% ( 12) 00:09:28.852 18634.333 - 18739.611: 96.6455% ( 15) 00:09:28.852 18739.611 - 18844.890: 96.7602% ( 13) 00:09:28.852 18844.890 - 18950.169: 96.8838% ( 14) 00:09:28.852 18950.169 - 19055.447: 97.0074% ( 14) 00:09:28.852 19055.447 - 19160.726: 97.0780% ( 8) 00:09:28.852 19160.726 - 19266.005: 97.1928% ( 13) 00:09:28.852 19266.005 - 19371.284: 97.3164% ( 14) 00:09:28.852 19371.284 - 19476.562: 97.4400% ( 14) 00:09:28.852 19476.562 - 19581.841: 97.5547% ( 13) 00:09:28.852 19581.841 - 19687.120: 97.6695% ( 13) 00:09:28.852 19687.120 - 19792.398: 97.7931% ( 14) 00:09:28.852 19792.398 - 19897.677: 97.9167% ( 14) 00:09:28.852 19897.677 - 20002.956: 98.0226% ( 12) 00:09:28.852 20002.956 - 20108.235: 98.1462% ( 14) 00:09:28.852 20108.235 - 20213.513: 98.1903% ( 5) 00:09:28.852 20213.513 - 20318.792: 98.2168% ( 3) 00:09:28.852 20318.792 - 20424.071: 98.2609% ( 5) 00:09:28.852 20424.071 - 20529.349: 98.3051% ( 5) 00:09:28.852 21055.743 - 21161.022: 98.3227% ( 2) 00:09:28.852 21161.022 - 21266.300: 98.3581% ( 4) 00:09:28.852 21266.300 - 21371.579: 98.3934% ( 4) 00:09:28.852 21371.579 - 21476.858: 98.4287% ( 4) 00:09:28.852 21476.858 - 21582.137: 98.4728% ( 5) 00:09:28.852 21582.137 - 21687.415: 98.5081% ( 4) 00:09:28.852 21687.415 - 21792.694: 98.5434% ( 4) 00:09:28.852 21792.694 - 21897.973: 98.5787% ( 4) 00:09:28.852 21897.973 - 22003.251: 98.6141% ( 4) 00:09:28.853 22003.251 - 22108.530: 98.6582% ( 5) 00:09:28.853 22108.530 - 22213.809: 98.6935% ( 4) 00:09:28.853 22213.809 - 22319.088: 98.7288% ( 4) 00:09:28.853 22319.088 - 22424.366: 98.7641% ( 4) 00:09:28.853 22424.366 - 22529.645: 98.7906% ( 3) 00:09:28.853 22529.645 - 22634.924: 98.8259% ( 4) 00:09:28.853 22634.924 - 22740.202: 98.8612% ( 4) 00:09:28.853 22740.202 - 22845.481: 98.8701% ( 1) 00:09:28.853 31794.172 - 32004.729: 98.9319% ( 7) 00:09:28.853 32004.729 - 32215.287: 99.0201% ( 10) 00:09:28.853 32215.287 - 32425.844: 99.0996% ( 9) 00:09:28.853 32425.844 - 32636.402: 99.1967% ( 11) 00:09:28.853 32636.402 - 32846.959: 99.2850% ( 10) 00:09:28.853 32846.959 - 33057.516: 99.3821% ( 11) 00:09:28.853 33057.516 - 33268.074: 99.4350% ( 6) 00:09:28.853 37900.337 - 38110.895: 99.4880% ( 6) 00:09:28.853 38110.895 - 38321.452: 99.5586% ( 8) 00:09:28.853 38321.452 - 38532.010: 99.6469% ( 10) 00:09:28.853 38532.010 - 38742.567: 99.7175% ( 8) 00:09:28.853 38742.567 - 38953.124: 99.7881% ( 8) 00:09:28.853 38953.124 - 39163.682: 99.8764% ( 10) 00:09:28.853 39163.682 - 39374.239: 99.9559% ( 9) 00:09:28.853 39374.239 - 39584.797: 100.0000% ( 5) 00:09:28.853 00:09:28.853 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:28.853 ============================================================================== 00:09:28.853 Range in us Cumulative IO count 00:09:28.853 5395.534 - 5421.854: 0.0177% ( 2) 00:09:28.853 5421.854 - 5448.173: 0.0353% ( 2) 00:09:28.853 5448.173 - 5474.493: 
0.0441% ( 1) 00:09:28.853 5474.493 - 5500.813: 0.0618% ( 2) 00:09:28.853 5500.813 - 5527.133: 0.0794% ( 2) 00:09:28.853 5527.133 - 5553.452: 0.0971% ( 2) 00:09:28.853 5553.452 - 5579.772: 0.1148% ( 2) 00:09:28.853 5579.772 - 5606.092: 0.1324% ( 2) 00:09:28.853 5606.092 - 5632.411: 0.1501% ( 2) 00:09:28.853 5632.411 - 5658.731: 0.1677% ( 2) 00:09:28.853 5658.731 - 5685.051: 0.1854% ( 2) 00:09:28.853 5685.051 - 5711.370: 0.2030% ( 2) 00:09:28.853 5711.370 - 5737.690: 0.2207% ( 2) 00:09:28.853 5737.690 - 5764.010: 0.2383% ( 2) 00:09:28.853 5764.010 - 5790.329: 0.2560% ( 2) 00:09:28.853 5790.329 - 5816.649: 0.2648% ( 1) 00:09:28.853 5816.649 - 5842.969: 0.2825% ( 2) 00:09:28.853 5842.969 - 5869.288: 0.3001% ( 2) 00:09:28.853 5869.288 - 5895.608: 0.3178% ( 2) 00:09:28.853 5895.608 - 5921.928: 0.3355% ( 2) 00:09:28.853 5921.928 - 5948.247: 0.3531% ( 2) 00:09:28.853 5948.247 - 5974.567: 0.3708% ( 2) 00:09:28.853 5974.567 - 6000.887: 0.3884% ( 2) 00:09:28.853 6000.887 - 6027.206: 0.3972% ( 1) 00:09:28.853 6027.206 - 6053.526: 0.4061% ( 1) 00:09:28.853 6079.846 - 6106.165: 0.4237% ( 2) 00:09:28.853 6106.165 - 6132.485: 0.4414% ( 2) 00:09:28.853 6132.485 - 6158.805: 0.4590% ( 2) 00:09:28.853 6158.805 - 6185.124: 0.4767% ( 2) 00:09:28.853 6185.124 - 6211.444: 0.4944% ( 2) 00:09:28.853 6211.444 - 6237.764: 0.5032% ( 1) 00:09:28.853 6237.764 - 6264.084: 0.5208% ( 2) 00:09:28.853 6264.084 - 6290.403: 0.5385% ( 2) 00:09:28.853 6290.403 - 6316.723: 0.5561% ( 2) 00:09:28.853 6316.723 - 6343.043: 0.5650% ( 1) 00:09:28.853 7053.674 - 7106.313: 0.6003% ( 4) 00:09:28.853 7106.313 - 7158.953: 0.6356% ( 4) 00:09:28.853 7158.953 - 7211.592: 0.6709% ( 4) 00:09:28.853 7211.592 - 7264.231: 0.7062% ( 4) 00:09:28.853 7264.231 - 7316.871: 0.7415% ( 4) 00:09:28.853 7316.871 - 7369.510: 0.7768% ( 4) 00:09:28.853 7369.510 - 7422.149: 0.8210% ( 5) 00:09:28.853 7422.149 - 7474.789: 0.8563% ( 4) 00:09:28.853 7474.789 - 7527.428: 0.8828% ( 3) 00:09:28.853 7527.428 - 7580.067: 0.9181% ( 4) 00:09:28.853 7580.067 - 7632.707: 0.9710% ( 6) 00:09:28.853 7632.707 - 7685.346: 1.0770% ( 12) 00:09:28.853 7685.346 - 7737.986: 1.1564% ( 9) 00:09:28.853 7737.986 - 7790.625: 1.3418% ( 21) 00:09:28.853 7790.625 - 7843.264: 1.6243% ( 32) 00:09:28.853 7843.264 - 7895.904: 1.9774% ( 40) 00:09:28.853 7895.904 - 7948.543: 2.3217% ( 39) 00:09:28.853 7948.543 - 8001.182: 2.7366% ( 47) 00:09:28.853 8001.182 - 8053.822: 3.2044% ( 53) 00:09:28.853 8053.822 - 8106.461: 3.6723% ( 53) 00:09:28.853 8106.461 - 8159.100: 4.1931% ( 59) 00:09:28.853 8159.100 - 8211.740: 4.7316% ( 61) 00:09:28.853 8211.740 - 8264.379: 5.2790% ( 62) 00:09:28.853 8264.379 - 8317.018: 5.9145% ( 72) 00:09:28.853 8317.018 - 8369.658: 6.4972% ( 66) 00:09:28.853 8369.658 - 8422.297: 7.2034% ( 80) 00:09:28.853 8422.297 - 8474.937: 7.9449% ( 84) 00:09:28.853 8474.937 - 8527.576: 8.6600% ( 81) 00:09:28.853 8527.576 - 8580.215: 9.3838% ( 82) 00:09:28.853 8580.215 - 8632.855: 10.1254% ( 84) 00:09:28.853 8632.855 - 8685.494: 10.8669% ( 84) 00:09:28.853 8685.494 - 8738.133: 11.5731% ( 80) 00:09:28.853 8738.133 - 8790.773: 12.4029% ( 94) 00:09:28.853 8790.773 - 8843.412: 13.3121% ( 103) 00:09:28.853 8843.412 - 8896.051: 14.2037% ( 101) 00:09:28.853 8896.051 - 8948.691: 15.1306% ( 105) 00:09:28.853 8948.691 - 9001.330: 16.0222% ( 101) 00:09:28.853 9001.330 - 9053.969: 17.0198% ( 113) 00:09:28.853 9053.969 - 9106.609: 17.9820% ( 109) 00:09:28.853 9106.609 - 9159.248: 19.0590% ( 122) 00:09:28.853 9159.248 - 9211.888: 20.2331% ( 133) 00:09:28.853 9211.888 - 9264.527: 21.4954% ( 143) 
00:09:28.853 9264.527 - 9317.166: 22.8107% ( 149) 00:09:28.853 9317.166 - 9369.806: 24.1967% ( 157) 00:09:28.853 9369.806 - 9422.445: 25.6444% ( 164) 00:09:28.853 9422.445 - 9475.084: 27.1628% ( 172) 00:09:28.853 9475.084 - 9527.724: 28.6458% ( 168) 00:09:28.853 9527.724 - 9580.363: 30.1201% ( 167) 00:09:28.853 9580.363 - 9633.002: 31.5590% ( 163) 00:09:28.853 9633.002 - 9685.642: 33.0244% ( 166) 00:09:28.853 9685.642 - 9738.281: 34.4368% ( 160) 00:09:28.853 9738.281 - 9790.920: 35.9287% ( 169) 00:09:28.853 9790.920 - 9843.560: 37.3146% ( 157) 00:09:28.854 9843.560 - 9896.199: 38.5681% ( 142) 00:09:28.854 9896.199 - 9948.839: 39.7775% ( 137) 00:09:28.854 9948.839 - 10001.478: 40.9163% ( 129) 00:09:28.854 10001.478 - 10054.117: 42.1169% ( 136) 00:09:28.854 10054.117 - 10106.757: 43.2645% ( 130) 00:09:28.854 10106.757 - 10159.396: 44.3503% ( 123) 00:09:28.854 10159.396 - 10212.035: 45.4891% ( 129) 00:09:28.854 10212.035 - 10264.675: 46.5042% ( 115) 00:09:28.854 10264.675 - 10317.314: 47.4841% ( 111) 00:09:28.854 10317.314 - 10369.953: 48.3934% ( 103) 00:09:28.854 10369.953 - 10422.593: 49.2496% ( 97) 00:09:28.854 10422.593 - 10475.232: 50.0530% ( 91) 00:09:28.854 10475.232 - 10527.871: 50.8828% ( 94) 00:09:28.854 10527.871 - 10580.511: 51.7655% ( 100) 00:09:28.854 10580.511 - 10633.150: 52.6924% ( 105) 00:09:28.854 10633.150 - 10685.790: 53.5752% ( 100) 00:09:28.854 10685.790 - 10738.429: 54.4668% ( 101) 00:09:28.854 10738.429 - 10791.068: 55.3407% ( 99) 00:09:28.854 10791.068 - 10843.708: 56.1617% ( 93) 00:09:28.854 10843.708 - 10896.347: 57.0533% ( 101) 00:09:28.854 10896.347 - 10948.986: 57.8655% ( 92) 00:09:28.854 10948.986 - 11001.626: 58.6776% ( 92) 00:09:28.854 11001.626 - 11054.265: 59.5162% ( 95) 00:09:28.854 11054.265 - 11106.904: 60.3549% ( 95) 00:09:28.854 11106.904 - 11159.544: 61.2200% ( 98) 00:09:28.854 11159.544 - 11212.183: 62.1469% ( 105) 00:09:28.854 11212.183 - 11264.822: 63.0738% ( 105) 00:09:28.854 11264.822 - 11317.462: 64.0537% ( 111) 00:09:28.854 11317.462 - 11370.101: 64.9188% ( 98) 00:09:28.854 11370.101 - 11422.741: 65.8898% ( 110) 00:09:28.854 11422.741 - 11475.380: 66.7373% ( 96) 00:09:28.854 11475.380 - 11528.019: 67.7790% ( 118) 00:09:28.854 11528.019 - 11580.659: 68.7235% ( 107) 00:09:28.854 11580.659 - 11633.298: 69.6857% ( 109) 00:09:28.854 11633.298 - 11685.937: 70.5862% ( 102) 00:09:28.854 11685.937 - 11738.577: 71.3895% ( 91) 00:09:28.854 11738.577 - 11791.216: 72.1487% ( 86) 00:09:28.854 11791.216 - 11843.855: 72.8549% ( 80) 00:09:28.854 11843.855 - 11896.495: 73.6052% ( 85) 00:09:28.854 11896.495 - 11949.134: 74.3379% ( 83) 00:09:28.854 11949.134 - 12001.773: 75.0441% ( 80) 00:09:28.854 12001.773 - 12054.413: 75.7945% ( 85) 00:09:28.854 12054.413 - 12107.052: 76.5007% ( 80) 00:09:28.854 12107.052 - 12159.692: 77.1804% ( 77) 00:09:28.854 12159.692 - 12212.331: 77.8160% ( 72) 00:09:28.854 12212.331 - 12264.970: 78.4251% ( 69) 00:09:28.854 12264.970 - 12317.610: 78.9636% ( 61) 00:09:28.854 12317.610 - 12370.249: 79.5021% ( 61) 00:09:28.854 12370.249 - 12422.888: 79.9612% ( 52) 00:09:28.854 12422.888 - 12475.528: 80.4643% ( 57) 00:09:28.854 12475.528 - 12528.167: 80.8792% ( 47) 00:09:28.854 12528.167 - 12580.806: 81.2588% ( 43) 00:09:28.854 12580.806 - 12633.446: 81.6031% ( 39) 00:09:28.854 12633.446 - 12686.085: 81.8768% ( 31) 00:09:28.854 12686.085 - 12738.724: 82.1416% ( 30) 00:09:28.854 12738.724 - 12791.364: 82.3888% ( 28) 00:09:28.854 12791.364 - 12844.003: 82.5830% ( 22) 00:09:28.854 12844.003 - 12896.643: 82.8213% ( 27) 00:09:28.854 12896.643 - 
12949.282: 83.0244% ( 23) 00:09:28.854 12949.282 - 13001.921: 83.2715% ( 28) 00:09:28.854 13001.921 - 13054.561: 83.5099% ( 27) 00:09:28.854 13054.561 - 13107.200: 83.7571% ( 28) 00:09:28.854 13107.200 - 13159.839: 83.9689% ( 24) 00:09:28.854 13159.839 - 13212.479: 84.1984% ( 26) 00:09:28.854 13212.479 - 13265.118: 84.4191% ( 25) 00:09:28.854 13265.118 - 13317.757: 84.6222% ( 23) 00:09:28.854 13317.757 - 13370.397: 84.8340% ( 24) 00:09:28.854 13370.397 - 13423.036: 85.0106% ( 20) 00:09:28.854 13423.036 - 13475.676: 85.1430% ( 15) 00:09:28.854 13475.676 - 13580.954: 85.4608% ( 36) 00:09:28.854 13580.954 - 13686.233: 85.8227% ( 41) 00:09:28.854 13686.233 - 13791.512: 86.2023% ( 43) 00:09:28.854 13791.512 - 13896.790: 86.6614% ( 52) 00:09:28.854 13896.790 - 14002.069: 87.1469% ( 55) 00:09:28.854 14002.069 - 14107.348: 87.6766% ( 60) 00:09:28.854 14107.348 - 14212.627: 88.1444% ( 53) 00:09:28.854 14212.627 - 14317.905: 88.5505% ( 46) 00:09:28.854 14317.905 - 14423.184: 89.0184% ( 53) 00:09:28.854 14423.184 - 14528.463: 89.4862% ( 53) 00:09:28.854 14528.463 - 14633.741: 89.9276% ( 50) 00:09:28.854 14633.741 - 14739.020: 90.3513% ( 48) 00:09:28.854 14739.020 - 14844.299: 90.6427% ( 33) 00:09:28.854 14844.299 - 14949.578: 90.8810% ( 27) 00:09:28.854 14949.578 - 15054.856: 91.0576% ( 20) 00:09:28.854 15054.856 - 15160.135: 91.2518% ( 22) 00:09:28.854 15160.135 - 15265.414: 91.4460% ( 22) 00:09:28.854 15265.414 - 15370.692: 91.6314% ( 21) 00:09:28.854 15370.692 - 15475.971: 91.8344% ( 23) 00:09:28.854 15475.971 - 15581.250: 92.0463% ( 24) 00:09:28.854 15581.250 - 15686.529: 92.2581% ( 24) 00:09:28.854 15686.529 - 15791.807: 92.4523% ( 22) 00:09:28.854 15791.807 - 15897.086: 92.6289% ( 20) 00:09:28.854 15897.086 - 16002.365: 92.7790% ( 17) 00:09:28.854 16002.365 - 16107.643: 92.9555% ( 20) 00:09:28.854 16107.643 - 16212.922: 93.1850% ( 26) 00:09:28.854 16212.922 - 16318.201: 93.3616% ( 20) 00:09:28.854 16318.201 - 16423.480: 93.5823% ( 25) 00:09:28.854 16423.480 - 16528.758: 93.8030% ( 25) 00:09:28.854 16528.758 - 16634.037: 94.0237% ( 25) 00:09:28.854 16634.037 - 16739.316: 94.2267% ( 23) 00:09:28.854 16739.316 - 16844.594: 94.3944% ( 19) 00:09:28.854 16844.594 - 16949.873: 94.5180% ( 14) 00:09:28.854 16949.873 - 17055.152: 94.6769% ( 18) 00:09:28.854 17055.152 - 17160.431: 94.8446% ( 19) 00:09:28.854 17160.431 - 17265.709: 95.0212% ( 20) 00:09:28.854 17265.709 - 17370.988: 95.1624% ( 16) 00:09:28.854 17370.988 - 17476.267: 95.3302% ( 19) 00:09:28.854 17476.267 - 17581.545: 95.4273% ( 11) 00:09:28.854 17581.545 - 17686.824: 95.5862% ( 18) 00:09:28.854 17686.824 - 17792.103: 95.7274% ( 16) 00:09:28.854 17792.103 - 17897.382: 95.8863% ( 18) 00:09:28.854 17897.382 - 18002.660: 96.0629% ( 20) 00:09:28.854 18002.660 - 18107.939: 96.2571% ( 22) 00:09:28.854 18107.939 - 18213.218: 96.4424% ( 21) 00:09:28.854 18213.218 - 18318.496: 96.5749% ( 15) 00:09:28.854 18318.496 - 18423.775: 96.7073% ( 15) 00:09:28.854 18423.775 - 18529.054: 96.8662% ( 18) 00:09:28.854 18529.054 - 18634.333: 97.0339% ( 19) 00:09:28.854 18634.333 - 18739.611: 97.2281% ( 22) 00:09:28.854 18739.611 - 18844.890: 97.3958% ( 19) 00:09:28.854 18844.890 - 18950.169: 97.5547% ( 18) 00:09:28.854 18950.169 - 19055.447: 97.7048% ( 17) 00:09:28.854 19055.447 - 19160.726: 97.8460% ( 16) 00:09:28.854 19160.726 - 19266.005: 97.9608% ( 13) 00:09:28.854 19266.005 - 19371.284: 98.0403% ( 9) 00:09:28.854 19371.284 - 19476.562: 98.0844% ( 5) 00:09:28.854 19476.562 - 19581.841: 98.1727% ( 10) 00:09:28.854 19581.841 - 19687.120: 98.2433% ( 8) 00:09:28.854 
19687.120 - 19792.398: 98.3227% ( 9) 00:09:28.854 19792.398 - 19897.677: 98.4022% ( 9) 00:09:28.854 19897.677 - 20002.956: 98.4816% ( 9) 00:09:28.854 20002.956 - 20108.235: 98.5611% ( 9) 00:09:28.854 20108.235 - 20213.513: 98.6317% ( 8) 00:09:28.854 20213.513 - 20318.792: 98.6758% ( 5) 00:09:28.854 20318.792 - 20424.071: 98.7112% ( 4) 00:09:28.854 20424.071 - 20529.349: 98.7553% ( 5) 00:09:28.854 20529.349 - 20634.628: 98.7994% ( 5) 00:09:28.854 20634.628 - 20739.907: 98.8436% ( 5) 00:09:28.854 20739.907 - 20845.186: 98.8701% ( 3) 00:09:28.854 30951.942 - 31162.500: 98.9319% ( 7) 00:09:28.854 31162.500 - 31373.057: 99.0290% ( 11) 00:09:28.854 31373.057 - 31583.614: 99.1084% ( 9) 00:09:28.854 31583.614 - 31794.172: 99.1967% ( 10) 00:09:28.854 31794.172 - 32004.729: 99.2938% ( 11) 00:09:28.855 32004.729 - 32215.287: 99.3821% ( 10) 00:09:28.855 32215.287 - 32425.844: 99.4350% ( 6) 00:09:28.855 37058.108 - 37268.665: 99.5056% ( 8) 00:09:28.855 37268.665 - 37479.222: 99.5763% ( 8) 00:09:28.855 37479.222 - 37689.780: 99.6557% ( 9) 00:09:28.855 37689.780 - 37900.337: 99.7352% ( 9) 00:09:28.855 37900.337 - 38110.895: 99.8146% ( 9) 00:09:28.855 38110.895 - 38321.452: 99.8764% ( 7) 00:09:28.855 38321.452 - 38532.010: 99.9470% ( 8) 00:09:28.855 38532.010 - 38742.567: 100.0000% ( 6) 00:09:28.855 00:09:28.855 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:28.855 ============================================================================== 00:09:28.855 Range in us Cumulative IO count 00:09:28.855 5211.296 - 5237.616: 0.0177% ( 2) 00:09:28.855 5237.616 - 5263.936: 0.0353% ( 2) 00:09:28.855 5263.936 - 5290.255: 0.0530% ( 2) 00:09:28.855 5290.255 - 5316.575: 0.0706% ( 2) 00:09:28.855 5316.575 - 5342.895: 0.0883% ( 2) 00:09:28.855 5342.895 - 5369.214: 0.0971% ( 1) 00:09:28.855 5369.214 - 5395.534: 0.1148% ( 2) 00:09:28.855 5395.534 - 5421.854: 0.1324% ( 2) 00:09:28.855 5421.854 - 5448.173: 0.1501% ( 2) 00:09:28.855 5448.173 - 5474.493: 0.1677% ( 2) 00:09:28.855 5474.493 - 5500.813: 0.1854% ( 2) 00:09:28.855 5500.813 - 5527.133: 0.2030% ( 2) 00:09:28.855 5527.133 - 5553.452: 0.2207% ( 2) 00:09:28.855 5553.452 - 5579.772: 0.2383% ( 2) 00:09:28.855 5579.772 - 5606.092: 0.2560% ( 2) 00:09:28.855 5606.092 - 5632.411: 0.2737% ( 2) 00:09:28.855 5632.411 - 5658.731: 0.2825% ( 1) 00:09:28.855 5658.731 - 5685.051: 0.3001% ( 2) 00:09:28.855 5685.051 - 5711.370: 0.3178% ( 2) 00:09:28.855 5711.370 - 5737.690: 0.3355% ( 2) 00:09:28.855 5737.690 - 5764.010: 0.3443% ( 1) 00:09:28.855 5764.010 - 5790.329: 0.3619% ( 2) 00:09:28.855 5790.329 - 5816.649: 0.3796% ( 2) 00:09:28.855 5816.649 - 5842.969: 0.3972% ( 2) 00:09:28.855 5842.969 - 5869.288: 0.4149% ( 2) 00:09:28.855 5869.288 - 5895.608: 0.4326% ( 2) 00:09:28.855 5895.608 - 5921.928: 0.4414% ( 1) 00:09:28.855 5921.928 - 5948.247: 0.4590% ( 2) 00:09:28.855 5948.247 - 5974.567: 0.4767% ( 2) 00:09:28.855 5974.567 - 6000.887: 0.4855% ( 1) 00:09:28.855 6000.887 - 6027.206: 0.5032% ( 2) 00:09:28.855 6027.206 - 6053.526: 0.5120% ( 1) 00:09:28.855 6053.526 - 6079.846: 0.5297% ( 2) 00:09:28.855 6079.846 - 6106.165: 0.5473% ( 2) 00:09:28.855 6106.165 - 6132.485: 0.5650% ( 2) 00:09:28.855 6790.477 - 6843.116: 0.5826% ( 2) 00:09:28.855 6843.116 - 6895.756: 0.6179% ( 4) 00:09:28.855 6895.756 - 6948.395: 0.6444% ( 3) 00:09:28.855 6948.395 - 7001.035: 0.6797% ( 4) 00:09:28.855 7001.035 - 7053.674: 0.7150% ( 4) 00:09:28.855 7053.674 - 7106.313: 0.7415% ( 3) 00:09:28.855 7106.313 - 7158.953: 0.7680% ( 3) 00:09:28.855 7158.953 - 7211.592: 0.7857% ( 2) 00:09:28.855 
7211.592 - 7264.231: 0.8210% ( 4) 00:09:28.855 7264.231 - 7316.871: 0.8563% ( 4) 00:09:28.855 7316.871 - 7369.510: 0.8828% ( 3) 00:09:28.855 7369.510 - 7422.149: 0.9181% ( 4) 00:09:28.855 7422.149 - 7474.789: 0.9446% ( 3) 00:09:28.855 7474.789 - 7527.428: 0.9799% ( 4) 00:09:28.855 7527.428 - 7580.067: 1.0064% ( 3) 00:09:28.855 7580.067 - 7632.707: 1.0770% ( 8) 00:09:28.855 7632.707 - 7685.346: 1.1829% ( 12) 00:09:28.855 7685.346 - 7737.986: 1.2977% ( 13) 00:09:28.855 7737.986 - 7790.625: 1.4654% ( 19) 00:09:28.855 7790.625 - 7843.264: 1.6684% ( 23) 00:09:28.855 7843.264 - 7895.904: 1.9509% ( 32) 00:09:28.855 7895.904 - 7948.543: 2.3305% ( 43) 00:09:28.855 7948.543 - 8001.182: 2.7366% ( 46) 00:09:28.855 8001.182 - 8053.822: 3.1603% ( 48) 00:09:28.855 8053.822 - 8106.461: 3.6547% ( 56) 00:09:28.855 8106.461 - 8159.100: 4.1578% ( 57) 00:09:28.855 8159.100 - 8211.740: 4.7140% ( 63) 00:09:28.855 8211.740 - 8264.379: 5.3143% ( 68) 00:09:28.855 8264.379 - 8317.018: 5.9322% ( 70) 00:09:28.855 8317.018 - 8369.658: 6.5943% ( 75) 00:09:28.855 8369.658 - 8422.297: 7.2652% ( 76) 00:09:28.855 8422.297 - 8474.937: 7.9537% ( 78) 00:09:28.855 8474.937 - 8527.576: 8.6070% ( 74) 00:09:28.855 8527.576 - 8580.215: 9.3573% ( 85) 00:09:28.855 8580.215 - 8632.855: 10.0989% ( 84) 00:09:28.855 8632.855 - 8685.494: 10.8316% ( 83) 00:09:28.855 8685.494 - 8738.133: 11.6437% ( 92) 00:09:28.855 8738.133 - 8790.773: 12.4823% ( 95) 00:09:28.855 8790.773 - 8843.412: 13.2857% ( 91) 00:09:28.855 8843.412 - 8896.051: 14.1773% ( 101) 00:09:28.855 8896.051 - 8948.691: 15.0689% ( 101) 00:09:28.855 8948.691 - 9001.330: 16.0311% ( 109) 00:09:28.855 9001.330 - 9053.969: 17.0463% ( 115) 00:09:28.855 9053.969 - 9106.609: 18.1497% ( 125) 00:09:28.855 9106.609 - 9159.248: 19.2885% ( 129) 00:09:28.855 9159.248 - 9211.888: 20.5685% ( 145) 00:09:28.855 9211.888 - 9264.527: 21.7867% ( 138) 00:09:28.855 9264.527 - 9317.166: 23.0756% ( 146) 00:09:28.855 9317.166 - 9369.806: 24.4350% ( 154) 00:09:28.855 9369.806 - 9422.445: 25.9269% ( 169) 00:09:28.855 9422.445 - 9475.084: 27.3746% ( 164) 00:09:28.855 9475.084 - 9527.724: 28.9107% ( 174) 00:09:28.855 9527.724 - 9580.363: 30.3672% ( 165) 00:09:28.855 9580.363 - 9633.002: 31.7532% ( 157) 00:09:28.855 9633.002 - 9685.642: 33.0508% ( 147) 00:09:28.855 9685.642 - 9738.281: 34.4809% ( 162) 00:09:28.855 9738.281 - 9790.920: 35.9463% ( 166) 00:09:28.855 9790.920 - 9843.560: 37.3146% ( 155) 00:09:28.855 9843.560 - 9896.199: 38.5064% ( 135) 00:09:28.855 9896.199 - 9948.839: 39.7334% ( 139) 00:09:28.855 9948.839 - 10001.478: 40.8987% ( 132) 00:09:28.855 10001.478 - 10054.117: 42.0021% ( 125) 00:09:28.855 10054.117 - 10106.757: 43.1232% ( 127) 00:09:28.855 10106.757 - 10159.396: 44.2090% ( 123) 00:09:28.855 10159.396 - 10212.035: 45.1536% ( 107) 00:09:28.855 10212.035 - 10264.675: 46.1070% ( 108) 00:09:28.855 10264.675 - 10317.314: 47.1398% ( 117) 00:09:28.855 10317.314 - 10369.953: 48.1109% ( 110) 00:09:28.855 10369.953 - 10422.593: 49.0113% ( 102) 00:09:28.855 10422.593 - 10475.232: 49.9647% ( 108) 00:09:28.855 10475.232 - 10527.871: 50.9181% ( 108) 00:09:28.855 10527.871 - 10580.511: 51.9156% ( 113) 00:09:28.855 10580.511 - 10633.150: 52.8778% ( 109) 00:09:28.855 10633.150 - 10685.790: 53.8136% ( 106) 00:09:28.855 10685.790 - 10738.429: 54.7228% ( 103) 00:09:28.855 10738.429 - 10791.068: 55.6144% ( 101) 00:09:28.855 10791.068 - 10843.708: 56.4177% ( 91) 00:09:28.855 10843.708 - 10896.347: 57.3270% ( 103) 00:09:28.855 10896.347 - 10948.986: 58.1921% ( 98) 00:09:28.855 10948.986 - 11001.626: 59.0749% ( 
100) 00:09:28.855 11001.626 - 11054.265: 59.9576% ( 100) 00:09:28.855 11054.265 - 11106.904: 60.9552% ( 113) 00:09:28.855 11106.904 - 11159.544: 61.8821% ( 105) 00:09:28.855 11159.544 - 11212.183: 62.7825% ( 102) 00:09:28.855 11212.183 - 11264.822: 63.6211% ( 95) 00:09:28.855 11264.822 - 11317.462: 64.5039% ( 100) 00:09:28.855 11317.462 - 11370.101: 65.3778% ( 99) 00:09:28.855 11370.101 - 11422.741: 66.3136% ( 106) 00:09:28.855 11422.741 - 11475.380: 67.1875% ( 99) 00:09:28.855 11475.380 - 11528.019: 68.0879% ( 102) 00:09:28.855 11528.019 - 11580.659: 69.0148% ( 105) 00:09:28.855 11580.659 - 11633.298: 69.8711% ( 97) 00:09:28.855 11633.298 - 11685.937: 70.6303% ( 86) 00:09:28.855 11685.937 - 11738.577: 71.4071% ( 88) 00:09:28.855 11738.577 - 11791.216: 72.1045% ( 79) 00:09:28.855 11791.216 - 11843.855: 72.8196% ( 81) 00:09:28.855 11843.855 - 11896.495: 73.5346% ( 81) 00:09:28.855 11896.495 - 11949.134: 74.2055% ( 76) 00:09:28.855 11949.134 - 12001.773: 74.8411% ( 72) 00:09:28.855 12001.773 - 12054.413: 75.4679% ( 71) 00:09:28.855 12054.413 - 12107.052: 76.0328% ( 64) 00:09:28.855 12107.052 - 12159.692: 76.6331% ( 68) 00:09:28.855 12159.692 - 12212.331: 77.2246% ( 67) 00:09:28.856 12212.331 - 12264.970: 77.8602% ( 72) 00:09:28.856 12264.970 - 12317.610: 78.4869% ( 71) 00:09:28.856 12317.610 - 12370.249: 79.0254% ( 61) 00:09:28.856 12370.249 - 12422.888: 79.5904% ( 64) 00:09:28.856 12422.888 - 12475.528: 80.1465% ( 63) 00:09:28.856 12475.528 - 12528.167: 80.5968% ( 51) 00:09:28.856 12528.167 - 12580.806: 80.9852% ( 44) 00:09:28.856 12580.806 - 12633.446: 81.3559% ( 42) 00:09:28.856 12633.446 - 12686.085: 81.6649% ( 35) 00:09:28.856 12686.085 - 12738.724: 81.9739% ( 35) 00:09:28.856 12738.724 - 12791.364: 82.2652% ( 33) 00:09:28.856 12791.364 - 12844.003: 82.5653% ( 34) 00:09:28.856 12844.003 - 12896.643: 82.8655% ( 34) 00:09:28.856 12896.643 - 12949.282: 83.1656% ( 34) 00:09:28.856 12949.282 - 13001.921: 83.4304% ( 30) 00:09:28.856 13001.921 - 13054.561: 83.6953% ( 30) 00:09:28.856 13054.561 - 13107.200: 83.9689% ( 31) 00:09:28.856 13107.200 - 13159.839: 84.2073% ( 27) 00:09:28.856 13159.839 - 13212.479: 84.4456% ( 27) 00:09:28.856 13212.479 - 13265.118: 84.6663% ( 25) 00:09:28.856 13265.118 - 13317.757: 84.9047% ( 27) 00:09:28.856 13317.757 - 13370.397: 85.1430% ( 27) 00:09:28.856 13370.397 - 13423.036: 85.3284% ( 21) 00:09:28.856 13423.036 - 13475.676: 85.5314% ( 23) 00:09:28.856 13475.676 - 13580.954: 85.9463% ( 47) 00:09:28.856 13580.954 - 13686.233: 86.4230% ( 54) 00:09:28.856 13686.233 - 13791.512: 86.8644% ( 50) 00:09:28.856 13791.512 - 13896.790: 87.3146% ( 51) 00:09:28.856 13896.790 - 14002.069: 87.7737% ( 52) 00:09:28.856 14002.069 - 14107.348: 88.2327% ( 52) 00:09:28.856 14107.348 - 14212.627: 88.6476% ( 47) 00:09:28.856 14212.627 - 14317.905: 89.0713% ( 48) 00:09:28.856 14317.905 - 14423.184: 89.4774% ( 46) 00:09:28.856 14423.184 - 14528.463: 89.8217% ( 39) 00:09:28.856 14528.463 - 14633.741: 90.1306% ( 35) 00:09:28.856 14633.741 - 14739.020: 90.3072% ( 20) 00:09:28.856 14739.020 - 14844.299: 90.4308% ( 14) 00:09:28.856 14844.299 - 14949.578: 90.5897% ( 18) 00:09:28.856 14949.578 - 15054.856: 90.7574% ( 19) 00:09:28.856 15054.856 - 15160.135: 90.9075% ( 17) 00:09:28.856 15160.135 - 15265.414: 91.0222% ( 13) 00:09:28.856 15265.414 - 15370.692: 91.1811% ( 18) 00:09:28.856 15370.692 - 15475.971: 91.3312% ( 17) 00:09:28.856 15475.971 - 15581.250: 91.4989% ( 19) 00:09:28.856 15581.250 - 15686.529: 91.7726% ( 31) 00:09:28.856 15686.529 - 15791.807: 92.0904% ( 36) 00:09:28.856 15791.807 
- 15897.086: 92.3817% ( 33) 00:09:28.856 15897.086 - 16002.365: 92.6995% ( 36) 00:09:28.856 16002.365 - 16107.643: 92.9290% ( 26) 00:09:28.856 16107.643 - 16212.922: 93.1497% ( 25) 00:09:28.856 16212.922 - 16318.201: 93.3792% ( 26) 00:09:28.856 16318.201 - 16423.480: 93.6617% ( 32) 00:09:28.856 16423.480 - 16528.758: 93.9177% ( 29) 00:09:28.856 16528.758 - 16634.037: 94.1119% ( 22) 00:09:28.856 16634.037 - 16739.316: 94.3150% ( 23) 00:09:28.856 16739.316 - 16844.594: 94.5180% ( 23) 00:09:28.856 16844.594 - 16949.873: 94.6946% ( 20) 00:09:28.856 16949.873 - 17055.152: 94.8711% ( 20) 00:09:28.856 17055.152 - 17160.431: 95.0388% ( 19) 00:09:28.856 17160.431 - 17265.709: 95.2242% ( 21) 00:09:28.856 17265.709 - 17370.988: 95.3919% ( 19) 00:09:28.856 17370.988 - 17476.267: 95.5332% ( 16) 00:09:28.856 17476.267 - 17581.545: 95.6744% ( 16) 00:09:28.856 17581.545 - 17686.824: 95.8069% ( 15) 00:09:28.856 17686.824 - 17792.103: 95.9481% ( 16) 00:09:28.856 17792.103 - 17897.382: 96.0982% ( 17) 00:09:28.856 17897.382 - 18002.660: 96.2394% ( 16) 00:09:28.856 18002.660 - 18107.939: 96.3542% ( 13) 00:09:28.856 18107.939 - 18213.218: 96.4778% ( 14) 00:09:28.856 18213.218 - 18318.496: 96.6367% ( 18) 00:09:28.856 18318.496 - 18423.775: 96.8044% ( 19) 00:09:28.856 18423.775 - 18529.054: 96.9721% ( 19) 00:09:28.856 18529.054 - 18634.333: 97.1222% ( 17) 00:09:28.856 18634.333 - 18739.611: 97.2105% ( 10) 00:09:28.856 18739.611 - 18844.890: 97.2899% ( 9) 00:09:28.856 18844.890 - 18950.169: 97.3782% ( 10) 00:09:28.856 18950.169 - 19055.447: 97.4488% ( 8) 00:09:28.856 19055.447 - 19160.726: 97.5459% ( 11) 00:09:28.856 19160.726 - 19266.005: 97.6783% ( 15) 00:09:28.856 19266.005 - 19371.284: 97.7578% ( 9) 00:09:28.856 19371.284 - 19476.562: 97.8460% ( 10) 00:09:28.856 19476.562 - 19581.841: 97.9167% ( 8) 00:09:28.856 19581.841 - 19687.120: 97.9520% ( 4) 00:09:28.856 19687.120 - 19792.398: 97.9873% ( 4) 00:09:28.856 19792.398 - 19897.677: 98.0667% ( 9) 00:09:28.856 19897.677 - 20002.956: 98.1462% ( 9) 00:09:28.856 20002.956 - 20108.235: 98.2256% ( 9) 00:09:28.856 20108.235 - 20213.513: 98.2963% ( 8) 00:09:28.856 20213.513 - 20318.792: 98.3757% ( 9) 00:09:28.856 20318.792 - 20424.071: 98.4640% ( 10) 00:09:28.856 20424.071 - 20529.349: 98.5434% ( 9) 00:09:28.856 20529.349 - 20634.628: 98.6317% ( 10) 00:09:28.856 20634.628 - 20739.907: 98.7112% ( 9) 00:09:28.856 20739.907 - 20845.186: 98.7553% ( 5) 00:09:28.856 20845.186 - 20950.464: 98.7994% ( 5) 00:09:28.856 20950.464 - 21055.743: 98.8436% ( 5) 00:09:28.856 21055.743 - 21161.022: 98.8701% ( 3) 00:09:28.856 30109.712 - 30320.270: 98.9054% ( 4) 00:09:28.856 30320.270 - 30530.827: 98.9936% ( 10) 00:09:28.856 30530.827 - 30741.385: 99.0819% ( 10) 00:09:28.856 30741.385 - 30951.942: 99.1702% ( 10) 00:09:28.856 30951.942 - 31162.500: 99.2408% ( 8) 00:09:28.856 31162.500 - 31373.057: 99.3203% ( 9) 00:09:28.856 31373.057 - 31583.614: 99.4085% ( 10) 00:09:28.856 31583.614 - 31794.172: 99.4350% ( 3) 00:09:28.856 36215.878 - 36426.435: 99.4615% ( 3) 00:09:28.856 36426.435 - 36636.993: 99.5233% ( 7) 00:09:28.856 36636.993 - 36847.550: 99.5939% ( 8) 00:09:28.856 36847.550 - 37058.108: 99.6822% ( 10) 00:09:28.856 37058.108 - 37268.665: 99.7528% ( 8) 00:09:28.856 37268.665 - 37479.222: 99.8411% ( 10) 00:09:28.856 37479.222 - 37689.780: 99.9206% ( 9) 00:09:28.856 37689.780 - 37900.337: 100.0000% ( 9) 00:09:28.856 00:09:28.856 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:28.856 ============================================================================== 
00:09:28.856 Range in us Cumulative IO count 00:09:28.856 4974.419 - 5000.739: 0.0088% ( 1) 00:09:28.856 5000.739 - 5027.059: 0.0265% ( 2) 00:09:28.856 5027.059 - 5053.378: 0.0353% ( 1) 00:09:28.856 5053.378 - 5079.698: 0.0530% ( 2) 00:09:28.856 5079.698 - 5106.018: 0.0706% ( 2) 00:09:28.856 5106.018 - 5132.337: 0.0883% ( 2) 00:09:28.856 5132.337 - 5158.657: 0.1059% ( 2) 00:09:28.856 5158.657 - 5184.977: 0.1236% ( 2) 00:09:28.856 5184.977 - 5211.296: 0.1412% ( 2) 00:09:28.856 5211.296 - 5237.616: 0.1501% ( 1) 00:09:28.856 5237.616 - 5263.936: 0.1677% ( 2) 00:09:28.856 5263.936 - 5290.255: 0.1854% ( 2) 00:09:28.856 5290.255 - 5316.575: 0.2030% ( 2) 00:09:28.856 5316.575 - 5342.895: 0.2207% ( 2) 00:09:28.856 5342.895 - 5369.214: 0.2383% ( 2) 00:09:28.856 5369.214 - 5395.534: 0.2560% ( 2) 00:09:28.856 5395.534 - 5421.854: 0.2737% ( 2) 00:09:28.856 5421.854 - 5448.173: 0.2913% ( 2) 00:09:28.856 5448.173 - 5474.493: 0.3001% ( 1) 00:09:28.856 5474.493 - 5500.813: 0.3178% ( 2) 00:09:28.856 5500.813 - 5527.133: 0.3266% ( 1) 00:09:28.856 5527.133 - 5553.452: 0.3443% ( 2) 00:09:28.856 5553.452 - 5579.772: 0.3619% ( 2) 00:09:28.856 5579.772 - 5606.092: 0.3796% ( 2) 00:09:28.856 5606.092 - 5632.411: 0.3972% ( 2) 00:09:28.856 5632.411 - 5658.731: 0.4149% ( 2) 00:09:28.856 5658.731 - 5685.051: 0.4237% ( 1) 00:09:28.856 5685.051 - 5711.370: 0.4414% ( 2) 00:09:28.856 5711.370 - 5737.690: 0.4590% ( 2) 00:09:28.856 5737.690 - 5764.010: 0.4767% ( 2) 00:09:28.856 5764.010 - 5790.329: 0.4944% ( 2) 00:09:28.856 5790.329 - 5816.649: 0.5120% ( 2) 00:09:28.857 5816.649 - 5842.969: 0.5208% ( 1) 00:09:28.857 5842.969 - 5869.288: 0.5385% ( 2) 00:09:28.857 5869.288 - 5895.608: 0.5561% ( 2) 00:09:28.857 5895.608 - 5921.928: 0.5650% ( 1) 00:09:28.857 6685.198 - 6711.518: 0.5738% ( 1) 00:09:28.857 6711.518 - 6737.838: 0.5826% ( 1) 00:09:28.857 6737.838 - 6790.477: 0.6179% ( 4) 00:09:28.857 6790.477 - 6843.116: 0.6532% ( 4) 00:09:28.857 6843.116 - 6895.756: 0.6886% ( 4) 00:09:28.857 6895.756 - 6948.395: 0.7150% ( 3) 00:09:28.857 6948.395 - 7001.035: 0.7504% ( 4) 00:09:28.857 7001.035 - 7053.674: 0.7857% ( 4) 00:09:28.857 7053.674 - 7106.313: 0.8210% ( 4) 00:09:28.857 7106.313 - 7158.953: 0.8563% ( 4) 00:09:28.857 7158.953 - 7211.592: 0.8916% ( 4) 00:09:28.857 7211.592 - 7264.231: 0.9269% ( 4) 00:09:28.857 7264.231 - 7316.871: 0.9622% ( 4) 00:09:28.857 7316.871 - 7369.510: 0.9975% ( 4) 00:09:28.857 7369.510 - 7422.149: 1.0240% ( 3) 00:09:28.857 7422.149 - 7474.789: 1.0593% ( 4) 00:09:28.857 7474.789 - 7527.428: 1.0946% ( 4) 00:09:28.857 7527.428 - 7580.067: 1.1211% ( 3) 00:09:28.857 7580.067 - 7632.707: 1.1476% ( 3) 00:09:28.857 7632.707 - 7685.346: 1.2006% ( 6) 00:09:28.857 7685.346 - 7737.986: 1.3153% ( 13) 00:09:28.857 7737.986 - 7790.625: 1.4654% ( 17) 00:09:28.857 7790.625 - 7843.264: 1.6949% ( 26) 00:09:28.857 7843.264 - 7895.904: 1.9597% ( 30) 00:09:28.857 7895.904 - 7948.543: 2.3040% ( 39) 00:09:28.857 7948.543 - 8001.182: 2.6483% ( 39) 00:09:28.857 8001.182 - 8053.822: 3.1427% ( 56) 00:09:28.857 8053.822 - 8106.461: 3.6282% ( 55) 00:09:28.857 8106.461 - 8159.100: 4.2020% ( 65) 00:09:28.857 8159.100 - 8211.740: 4.7758% ( 65) 00:09:28.857 8211.740 - 8264.379: 5.4025% ( 71) 00:09:28.857 8264.379 - 8317.018: 6.0028% ( 68) 00:09:28.857 8317.018 - 8369.658: 6.6826% ( 77) 00:09:28.857 8369.658 - 8422.297: 7.3888% ( 80) 00:09:28.857 8422.297 - 8474.937: 8.0685% ( 77) 00:09:28.857 8474.937 - 8527.576: 8.7394% ( 76) 00:09:28.857 8527.576 - 8580.215: 9.4280% ( 78) 00:09:28.857 8580.215 - 8632.855: 10.0459% ( 70) 00:09:28.857 
8632.855 - 8685.494: 10.7609% ( 81) 00:09:28.857 8685.494 - 8738.133: 11.5907% ( 94) 00:09:28.857 8738.133 - 8790.773: 12.4559% ( 98) 00:09:28.857 8790.773 - 8843.412: 13.3916% ( 106) 00:09:28.857 8843.412 - 8896.051: 14.3715% ( 111) 00:09:28.857 8896.051 - 8948.691: 15.4043% ( 117) 00:09:28.857 8948.691 - 9001.330: 16.4195% ( 115) 00:09:28.857 9001.330 - 9053.969: 17.4347% ( 115) 00:09:28.857 9053.969 - 9106.609: 18.5028% ( 121) 00:09:28.857 9106.609 - 9159.248: 19.6681% ( 132) 00:09:28.857 9159.248 - 9211.888: 21.0187% ( 153) 00:09:28.857 9211.888 - 9264.527: 22.3252% ( 148) 00:09:28.857 9264.527 - 9317.166: 23.6317% ( 148) 00:09:28.857 9317.166 - 9369.806: 24.9647% ( 151) 00:09:28.857 9369.806 - 9422.445: 26.3506% ( 157) 00:09:28.857 9422.445 - 9475.084: 27.7984% ( 164) 00:09:28.857 9475.084 - 9527.724: 29.2020% ( 159) 00:09:28.857 9527.724 - 9580.363: 30.7292% ( 173) 00:09:28.857 9580.363 - 9633.002: 32.1063% ( 156) 00:09:28.857 9633.002 - 9685.642: 33.4569% ( 153) 00:09:28.857 9685.642 - 9738.281: 34.8429% ( 157) 00:09:28.857 9738.281 - 9790.920: 36.1229% ( 145) 00:09:28.857 9790.920 - 9843.560: 37.3499% ( 139) 00:09:28.857 9843.560 - 9896.199: 38.5505% ( 136) 00:09:28.857 9896.199 - 9948.839: 39.7511% ( 136) 00:09:28.857 9948.839 - 10001.478: 40.8457% ( 124) 00:09:28.857 10001.478 - 10054.117: 41.9492% ( 125) 00:09:28.857 10054.117 - 10106.757: 42.9114% ( 109) 00:09:28.857 10106.757 - 10159.396: 43.7941% ( 100) 00:09:28.857 10159.396 - 10212.035: 44.6857% ( 101) 00:09:28.857 10212.035 - 10264.675: 45.6215% ( 106) 00:09:28.857 10264.675 - 10317.314: 46.5307% ( 103) 00:09:28.857 10317.314 - 10369.953: 47.4929% ( 109) 00:09:28.857 10369.953 - 10422.593: 48.4463% ( 108) 00:09:28.857 10422.593 - 10475.232: 49.4703% ( 116) 00:09:28.857 10475.232 - 10527.871: 50.5738% ( 125) 00:09:28.857 10527.871 - 10580.511: 51.5184% ( 107) 00:09:28.857 10580.511 - 10633.150: 52.4982% ( 111) 00:09:28.857 10633.150 - 10685.790: 53.4075% ( 103) 00:09:28.857 10685.790 - 10738.429: 54.3256% ( 104) 00:09:28.857 10738.429 - 10791.068: 55.2436% ( 104) 00:09:28.857 10791.068 - 10843.708: 56.2323% ( 112) 00:09:28.857 10843.708 - 10896.347: 57.2387% ( 114) 00:09:28.857 10896.347 - 10948.986: 58.1833% ( 107) 00:09:28.857 10948.986 - 11001.626: 59.2161% ( 117) 00:09:28.857 11001.626 - 11054.265: 60.2578% ( 118) 00:09:28.857 11054.265 - 11106.904: 61.2730% ( 115) 00:09:28.857 11106.904 - 11159.544: 62.3411% ( 121) 00:09:28.857 11159.544 - 11212.183: 63.3475% ( 114) 00:09:28.857 11212.183 - 11264.822: 64.2126% ( 98) 00:09:28.857 11264.822 - 11317.462: 65.0953% ( 100) 00:09:28.857 11317.462 - 11370.101: 65.9163% ( 93) 00:09:28.857 11370.101 - 11422.741: 66.8609% ( 107) 00:09:28.857 11422.741 - 11475.380: 67.6730% ( 92) 00:09:28.857 11475.380 - 11528.019: 68.5293% ( 97) 00:09:28.857 11528.019 - 11580.659: 69.3326% ( 91) 00:09:28.857 11580.659 - 11633.298: 70.0830% ( 85) 00:09:28.857 11633.298 - 11685.937: 70.8598% ( 88) 00:09:28.857 11685.937 - 11738.577: 71.6102% ( 85) 00:09:28.857 11738.577 - 11791.216: 72.3958% ( 89) 00:09:28.857 11791.216 - 11843.855: 73.1462% ( 85) 00:09:28.857 11843.855 - 11896.495: 73.7730% ( 71) 00:09:28.857 11896.495 - 11949.134: 74.4262% ( 74) 00:09:28.857 11949.134 - 12001.773: 75.0706% ( 73) 00:09:28.857 12001.773 - 12054.413: 75.6974% ( 71) 00:09:28.857 12054.413 - 12107.052: 76.3242% ( 71) 00:09:28.857 12107.052 - 12159.692: 76.9068% ( 66) 00:09:28.857 12159.692 - 12212.331: 77.4718% ( 64) 00:09:28.857 12212.331 - 12264.970: 78.0632% ( 67) 00:09:28.857 12264.970 - 12317.610: 78.5929% ( 60) 
00:09:28.857 12317.610 - 12370.249: 79.1667% ( 65) 00:09:28.857 12370.249 - 12422.888: 79.6787% ( 58) 00:09:28.857 12422.888 - 12475.528: 80.0847% ( 46) 00:09:28.857 12475.528 - 12528.167: 80.5173% ( 49) 00:09:28.857 12528.167 - 12580.806: 80.8881% ( 42) 00:09:28.857 12580.806 - 12633.446: 81.2765% ( 44) 00:09:28.857 12633.446 - 12686.085: 81.6737% ( 45) 00:09:28.857 12686.085 - 12738.724: 82.0445% ( 42) 00:09:28.857 12738.724 - 12791.364: 82.3888% ( 39) 00:09:28.857 12791.364 - 12844.003: 82.6889% ( 34) 00:09:28.857 12844.003 - 12896.643: 83.0332% ( 39) 00:09:28.857 12896.643 - 12949.282: 83.3333% ( 34) 00:09:28.857 12949.282 - 13001.921: 83.6423% ( 35) 00:09:28.857 13001.921 - 13054.561: 83.9248% ( 32) 00:09:28.857 13054.561 - 13107.200: 84.1808% ( 29) 00:09:28.857 13107.200 - 13159.839: 84.3838% ( 23) 00:09:28.857 13159.839 - 13212.479: 84.6310% ( 28) 00:09:28.857 13212.479 - 13265.118: 84.8694% ( 27) 00:09:28.857 13265.118 - 13317.757: 85.0812% ( 24) 00:09:28.857 13317.757 - 13370.397: 85.3019% ( 25) 00:09:28.857 13370.397 - 13423.036: 85.4873% ( 21) 00:09:28.857 13423.036 - 13475.676: 85.6903% ( 23) 00:09:28.857 13475.676 - 13580.954: 86.0876% ( 45) 00:09:28.857 13580.954 - 13686.233: 86.4672% ( 43) 00:09:28.857 13686.233 - 13791.512: 86.9174% ( 51) 00:09:28.857 13791.512 - 13896.790: 87.4029% ( 55) 00:09:28.857 13896.790 - 14002.069: 87.8796% ( 54) 00:09:28.857 14002.069 - 14107.348: 88.3298% ( 51) 00:09:28.857 14107.348 - 14212.627: 88.7359% ( 46) 00:09:28.857 14212.627 - 14317.905: 89.1155% ( 43) 00:09:28.857 14317.905 - 14423.184: 89.4421% ( 37) 00:09:28.857 14423.184 - 14528.463: 89.7687% ( 37) 00:09:28.857 14528.463 - 14633.741: 90.0335% ( 30) 00:09:28.857 14633.741 - 14739.020: 90.3072% ( 31) 00:09:28.857 14739.020 - 14844.299: 90.5544% ( 28) 00:09:28.857 14844.299 - 14949.578: 90.7398% ( 21) 00:09:28.857 14949.578 - 15054.856: 90.9075% ( 19) 00:09:28.857 15054.856 - 15160.135: 91.1017% ( 22) 00:09:28.857 15160.135 - 15265.414: 91.3136% ( 24) 00:09:28.857 15265.414 - 15370.692: 91.5343% ( 25) 00:09:28.857 15370.692 - 15475.971: 91.7020% ( 19) 00:09:28.857 15475.971 - 15581.250: 91.8520% ( 17) 00:09:28.857 15581.250 - 15686.529: 91.9845% ( 15) 00:09:28.857 15686.529 - 15791.807: 92.1081% ( 14) 00:09:28.857 15791.807 - 15897.086: 92.2669% ( 18) 00:09:28.857 15897.086 - 16002.365: 92.4435% ( 20) 00:09:28.857 16002.365 - 16107.643: 92.6642% ( 25) 00:09:28.857 16107.643 - 16212.922: 92.9290% ( 30) 00:09:28.857 16212.922 - 16318.201: 93.2821% ( 40) 00:09:28.857 16318.201 - 16423.480: 93.6264% ( 39) 00:09:28.857 16423.480 - 16528.758: 93.9001% ( 31) 00:09:28.858 16528.758 - 16634.037: 94.1649% ( 30) 00:09:28.858 16634.037 - 16739.316: 94.4650% ( 34) 00:09:28.858 16739.316 - 16844.594: 94.7387% ( 31) 00:09:28.858 16844.594 - 16949.873: 95.0035% ( 30) 00:09:28.858 16949.873 - 17055.152: 95.2242% ( 25) 00:09:28.858 17055.152 - 17160.431: 95.4626% ( 27) 00:09:28.858 17160.431 - 17265.709: 95.6656% ( 23) 00:09:28.858 17265.709 - 17370.988: 95.8422% ( 20) 00:09:28.858 17370.988 - 17476.267: 96.0011% ( 18) 00:09:28.858 17476.267 - 17581.545: 96.1776% ( 20) 00:09:28.858 17581.545 - 17686.824: 96.3277% ( 17) 00:09:28.858 17686.824 - 17792.103: 96.4513% ( 14) 00:09:28.858 17792.103 - 17897.382: 96.5307% ( 9) 00:09:28.858 17897.382 - 18002.660: 96.6013% ( 8) 00:09:28.858 18002.660 - 18107.939: 96.6102% ( 1) 00:09:28.858 18529.054 - 18634.333: 96.6720% ( 7) 00:09:28.858 18634.333 - 18739.611: 96.7602% ( 10) 00:09:28.858 18739.611 - 18844.890: 96.8309% ( 8) 00:09:28.858 18844.890 - 18950.169: 
96.9280% ( 11) 00:09:28.858 18950.169 - 19055.447: 96.9898% ( 7) 00:09:28.858 19055.447 - 19160.726: 97.0869% ( 11) 00:09:28.858 19160.726 - 19266.005: 97.1575% ( 8) 00:09:28.858 19266.005 - 19371.284: 97.2458% ( 10) 00:09:28.858 19371.284 - 19476.562: 97.3164% ( 8) 00:09:28.858 19476.562 - 19581.841: 97.3958% ( 9) 00:09:28.858 19581.841 - 19687.120: 97.4665% ( 8) 00:09:28.858 19687.120 - 19792.398: 97.5636% ( 11) 00:09:28.858 19792.398 - 19897.677: 97.6695% ( 12) 00:09:28.858 19897.677 - 20002.956: 97.7578% ( 10) 00:09:28.858 20002.956 - 20108.235: 97.8372% ( 9) 00:09:28.858 20108.235 - 20213.513: 97.9078% ( 8) 00:09:28.858 20213.513 - 20318.792: 97.9873% ( 9) 00:09:28.858 20318.792 - 20424.071: 98.0667% ( 9) 00:09:28.858 20424.071 - 20529.349: 98.1374% ( 8) 00:09:28.858 20529.349 - 20634.628: 98.2256% ( 10) 00:09:28.858 20634.628 - 20739.907: 98.2963% ( 8) 00:09:28.858 20739.907 - 20845.186: 98.3757% ( 9) 00:09:28.858 20845.186 - 20950.464: 98.4552% ( 9) 00:09:28.858 20950.464 - 21055.743: 98.5258% ( 8) 00:09:28.858 21055.743 - 21161.022: 98.5964% ( 8) 00:09:28.858 21161.022 - 21266.300: 98.6758% ( 9) 00:09:28.858 21266.300 - 21371.579: 98.7465% ( 8) 00:09:28.858 21371.579 - 21476.858: 98.8347% ( 10) 00:09:28.858 21476.858 - 21582.137: 98.8701% ( 4) 00:09:28.858 29478.040 - 29688.598: 98.9054% ( 4) 00:09:28.858 29688.598 - 29899.155: 98.9936% ( 10) 00:09:28.858 29899.155 - 30109.712: 99.0819% ( 10) 00:09:28.858 30109.712 - 30320.270: 99.1702% ( 10) 00:09:28.858 30320.270 - 30530.827: 99.2585% ( 10) 00:09:28.858 30530.827 - 30741.385: 99.3556% ( 11) 00:09:28.858 30741.385 - 30951.942: 99.4262% ( 8) 00:09:28.858 30951.942 - 31162.500: 99.4350% ( 1) 00:09:28.858 35584.206 - 35794.763: 99.4880% ( 6) 00:09:28.858 35794.763 - 36005.320: 99.5674% ( 9) 00:09:28.858 36005.320 - 36215.878: 99.6469% ( 9) 00:09:28.858 36215.878 - 36426.435: 99.7175% ( 8) 00:09:28.858 36426.435 - 36636.993: 99.7970% ( 9) 00:09:28.858 36636.993 - 36847.550: 99.8852% ( 10) 00:09:28.858 36847.550 - 37058.108: 99.9647% ( 9) 00:09:28.858 37058.108 - 37268.665: 100.0000% ( 4) 00:09:28.858 00:09:28.858 15:09:27 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:30.235 Initializing NVMe Controllers 00:09:30.235 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:30.235 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:30.235 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:30.235 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:30.235 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:30.235 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:30.235 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:30.235 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:30.235 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:30.235 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:30.235 Initialization complete. Launching workers. 
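The perf invocation above drives everything that follows: per SPDK's perf usage text, -q 128 sets the queue depth, -w write the I/O pattern, -o 12288 the I/O size in bytes, -t 1 the run time in seconds, -LL enables latency tracking (which produces the bucket histograms and percentile summaries), and -i 0 the shared-memory group ID. The percentile summary rows below can be read straight off the cumulative histograms: the p-th percentile is the upper bound of the first bucket whose cumulative share reaches p percent. A minimal sketch of that lookup, assuming bucket rows shaped like the "Range in us  Cumulative IO count" lines in this log; the regex and the helper name percentile_from_buckets are illustrative, not SPDK code:

    import re

    # Bucket rows as printed under "Range in us  Cumulative IO count".
    ROW = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%")

    def percentile_from_buckets(rows, p):
        # p-th percentile = upper bound of the first bucket whose
        # cumulative share reaches p percent of completed I/Os.
        for lower, upper, cum in rows:
            if cum >= p:
                return upper
        return rows[-1][1]

    # Sample buckets taken from the 0000:00:10.0 histogram below.
    sample = [
        "9580.363 - 9633.002: 0.1599%",
        "15791.807 - 15897.086: 51.0950%",
        "21266.300 - 21371.579: 98.0069%",
    ]
    rows = [tuple(map(float, ROW.match(s).groups())) for s in sample]
    print(percentile_from_buckets(rows, 50.0))  # 15897.086
    print(percentile_from_buckets(rows, 98.0))  # 21371.579

Applied to the 0000:00:10.0 device, this reproduces the summary rows printed below (50.00000% : 15897.086us and 98.00000% : 21371.579us) from its histogram buckets.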
00:09:30.235 ======================================================== 00:09:30.235 Latency(us) 00:09:30.235 Device Information : IOPS MiB/s Average min max 00:09:30.235 PCIE (0000:00:10.0) NSID 1 from core 0: 8090.54 94.81 15832.33 9599.05 42100.05 00:09:30.235 PCIE (0000:00:11.0) NSID 1 from core 0: 8090.54 94.81 15799.44 9932.74 40219.01 00:09:30.235 PCIE (0000:00:13.0) NSID 1 from core 0: 8090.54 94.81 15768.29 9146.07 39768.87 00:09:30.235 PCIE (0000:00:12.0) NSID 1 from core 0: 8090.54 94.81 15736.39 9337.27 38364.91 00:09:30.235 PCIE (0000:00:12.0) NSID 2 from core 0: 8090.54 94.81 15704.89 8428.71 37203.67 00:09:30.235 PCIE (0000:00:12.0) NSID 3 from core 0: 8090.54 94.81 15673.15 7433.49 36151.17 00:09:30.235 ======================================================== 00:09:30.235 Total : 48543.24 568.87 15752.42 7433.49 42100.05 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:30.235 ================================================================================= 00:09:30.235 1.00000% : 10896.347us 00:09:30.235 10.00000% : 12159.692us 00:09:30.235 25.00000% : 13896.790us 00:09:30.235 50.00000% : 15897.086us 00:09:30.235 75.00000% : 17265.709us 00:09:30.235 90.00000% : 18634.333us 00:09:30.235 95.00000% : 19687.120us 00:09:30.235 98.00000% : 21371.579us 00:09:30.235 99.00000% : 28425.253us 00:09:30.235 99.50000% : 40216.469us 00:09:30.235 99.90000% : 41900.929us 00:09:30.235 99.99000% : 42111.486us 00:09:30.235 99.99900% : 42111.486us 00:09:30.235 99.99990% : 42111.486us 00:09:30.235 99.99999% : 42111.486us 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:30.235 ================================================================================= 00:09:30.235 1.00000% : 11001.626us 00:09:30.235 10.00000% : 12054.413us 00:09:30.235 25.00000% : 13896.790us 00:09:30.235 50.00000% : 16002.365us 00:09:30.235 75.00000% : 17265.709us 00:09:30.235 90.00000% : 18529.054us 00:09:30.235 95.00000% : 19476.562us 00:09:30.235 98.00000% : 21266.300us 00:09:30.235 99.00000% : 27793.581us 00:09:30.235 99.50000% : 38742.567us 00:09:30.235 99.90000% : 40005.912us 00:09:30.235 99.99000% : 40427.027us 00:09:30.235 99.99900% : 40427.027us 00:09:30.235 99.99990% : 40427.027us 00:09:30.235 99.99999% : 40427.027us 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:30.235 ================================================================================= 00:09:30.235 1.00000% : 10212.035us 00:09:30.235 10.00000% : 12212.331us 00:09:30.235 25.00000% : 13896.790us 00:09:30.235 50.00000% : 15791.807us 00:09:30.235 75.00000% : 17370.988us 00:09:30.235 90.00000% : 18423.775us 00:09:30.235 95.00000% : 19266.005us 00:09:30.235 98.00000% : 21476.858us 00:09:30.235 99.00000% : 28214.696us 00:09:30.235 99.50000% : 38321.452us 00:09:30.235 99.90000% : 39584.797us 00:09:30.235 99.99000% : 39795.354us 00:09:30.235 99.99900% : 39795.354us 00:09:30.235 99.99990% : 39795.354us 00:09:30.235 99.99999% : 39795.354us 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:30.235 ================================================================================= 00:09:30.235 1.00000% : 10001.478us 00:09:30.235 10.00000% : 12212.331us 00:09:30.235 25.00000% : 13791.512us 00:09:30.235 50.00000% : 15791.807us 00:09:30.235 75.00000% : 17370.988us 00:09:30.235 90.00000% : 18423.775us 00:09:30.235 95.00000% : 19055.447us 00:09:30.235 98.00000% : 21161.022us 
00:09:30.235 99.00000% : 27161.908us 00:09:30.235 99.50000% : 36847.550us 00:09:30.235 99.90000% : 38110.895us 00:09:30.235 99.99000% : 38532.010us 00:09:30.235 99.99900% : 38532.010us 00:09:30.235 99.99990% : 38532.010us 00:09:30.235 99.99999% : 38532.010us 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:30.235 ================================================================================= 00:09:30.235 1.00000% : 9843.560us 00:09:30.235 10.00000% : 12212.331us 00:09:30.235 25.00000% : 13896.790us 00:09:30.235 50.00000% : 15897.086us 00:09:30.235 75.00000% : 17370.988us 00:09:30.235 90.00000% : 18423.775us 00:09:30.235 95.00000% : 19160.726us 00:09:30.235 98.00000% : 20845.186us 00:09:30.235 99.00000% : 25793.285us 00:09:30.235 99.50000% : 35794.763us 00:09:30.235 99.90000% : 37058.108us 00:09:30.235 99.99000% : 37268.665us 00:09:30.235 99.99900% : 37268.665us 00:09:30.235 99.99990% : 37268.665us 00:09:30.235 99.99999% : 37268.665us 00:09:30.235 00:09:30.235 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:30.235 ================================================================================= 00:09:30.236 1.00000% : 9738.281us 00:09:30.236 10.00000% : 12107.052us 00:09:30.236 25.00000% : 13896.790us 00:09:30.236 50.00000% : 15791.807us 00:09:30.236 75.00000% : 17265.709us 00:09:30.236 90.00000% : 18529.054us 00:09:30.236 95.00000% : 19266.005us 00:09:30.236 98.00000% : 20845.186us 00:09:30.236 99.00000% : 24740.498us 00:09:30.236 99.50000% : 34741.976us 00:09:30.236 99.90000% : 36005.320us 00:09:30.236 99.99000% : 36215.878us 00:09:30.236 99.99900% : 36215.878us 00:09:30.236 99.99990% : 36215.878us 00:09:30.236 99.99999% : 36215.878us 00:09:30.236 00:09:30.236 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:30.236 ============================================================================== 00:09:30.236 Range in us Cumulative IO count 00:09:30.236 9580.363 - 9633.002: 0.1599% ( 13) 00:09:30.236 9633.002 - 9685.642: 0.1845% ( 2) 00:09:30.236 9685.642 - 9738.281: 0.2215% ( 3) 00:09:30.236 9738.281 - 9790.920: 0.2338% ( 1) 00:09:30.236 9790.920 - 9843.560: 0.2830% ( 4) 00:09:30.236 9843.560 - 9896.199: 0.3076% ( 2) 00:09:30.236 9896.199 - 9948.839: 0.3199% ( 1) 00:09:30.236 9948.839 - 10001.478: 0.3568% ( 3) 00:09:30.236 10001.478 - 10054.117: 0.3937% ( 3) 00:09:30.236 10054.117 - 10106.757: 0.4306% ( 3) 00:09:30.236 10106.757 - 10159.396: 0.4552% ( 2) 00:09:30.236 10159.396 - 10212.035: 0.4798% ( 2) 00:09:30.236 10212.035 - 10264.675: 0.5167% ( 3) 00:09:30.236 10264.675 - 10317.314: 0.5536% ( 3) 00:09:30.236 10317.314 - 10369.953: 0.5906% ( 3) 00:09:30.236 10369.953 - 10422.593: 0.6152% ( 2) 00:09:30.236 10422.593 - 10475.232: 0.6521% ( 3) 00:09:30.236 10475.232 - 10527.871: 0.6890% ( 3) 00:09:30.236 10527.871 - 10580.511: 0.7259% ( 3) 00:09:30.236 10580.511 - 10633.150: 0.7505% ( 2) 00:09:30.236 10633.150 - 10685.790: 0.7997% ( 4) 00:09:30.236 10685.790 - 10738.429: 0.8243% ( 2) 00:09:30.236 10738.429 - 10791.068: 0.8489% ( 2) 00:09:30.236 10791.068 - 10843.708: 0.9104% ( 5) 00:09:30.236 10843.708 - 10896.347: 1.3041% ( 32) 00:09:30.236 10896.347 - 10948.986: 1.4887% ( 15) 00:09:30.236 10948.986 - 11001.626: 1.5625% ( 6) 00:09:30.236 11001.626 - 11054.265: 1.5994% ( 3) 00:09:30.236 11054.265 - 11106.904: 1.6363% ( 3) 00:09:30.236 11106.904 - 11159.544: 1.7840% ( 12) 00:09:30.236 11159.544 - 11212.183: 2.1777% ( 32) 00:09:30.236 11212.183 - 11264.822: 2.4483% ( 22) 00:09:30.236 11264.822 - 
11317.462: 2.6329% ( 15) 00:09:30.236 11317.462 - 11370.101: 3.0266% ( 32) 00:09:30.236 11370.101 - 11422.741: 3.4818% ( 37) 00:09:30.236 11422.741 - 11475.380: 3.9370% ( 37) 00:09:30.236 11475.380 - 11528.019: 4.4783% ( 44) 00:09:30.236 11528.019 - 11580.659: 4.8228% ( 28) 00:09:30.236 11580.659 - 11633.298: 5.2042% ( 31) 00:09:30.236 11633.298 - 11685.937: 5.6471% ( 36) 00:09:30.236 11685.937 - 11738.577: 6.1393% ( 40) 00:09:30.236 11738.577 - 11791.216: 6.6314% ( 40) 00:09:30.236 11791.216 - 11843.855: 7.0989% ( 38) 00:09:30.236 11843.855 - 11896.495: 7.7264% ( 51) 00:09:30.236 11896.495 - 11949.134: 8.2554% ( 43) 00:09:30.236 11949.134 - 12001.773: 8.7475% ( 40) 00:09:30.236 12001.773 - 12054.413: 9.1412% ( 32) 00:09:30.236 12054.413 - 12107.052: 9.6088% ( 38) 00:09:30.236 12107.052 - 12159.692: 10.0394% ( 35) 00:09:30.236 12159.692 - 12212.331: 10.5069% ( 38) 00:09:30.236 12212.331 - 12264.970: 10.9006% ( 32) 00:09:30.236 12264.970 - 12317.610: 11.3435% ( 36) 00:09:30.236 12317.610 - 12370.249: 11.7987% ( 37) 00:09:30.236 12370.249 - 12422.888: 12.3524% ( 45) 00:09:30.236 12422.888 - 12475.528: 12.9183% ( 46) 00:09:30.236 12475.528 - 12528.167: 13.3489% ( 35) 00:09:30.236 12528.167 - 12580.806: 13.8164% ( 38) 00:09:30.236 12580.806 - 12633.446: 14.3209% ( 41) 00:09:30.236 12633.446 - 12686.085: 14.7146% ( 32) 00:09:30.236 12686.085 - 12738.724: 15.2559% ( 44) 00:09:30.236 12738.724 - 12791.364: 15.7111% ( 37) 00:09:30.236 12791.364 - 12844.003: 16.0925% ( 31) 00:09:30.236 12844.003 - 12896.643: 16.5354% ( 36) 00:09:30.236 12896.643 - 12949.282: 16.9291% ( 32) 00:09:30.236 12949.282 - 13001.921: 17.4090% ( 39) 00:09:30.236 13001.921 - 13054.561: 17.8765% ( 38) 00:09:30.236 13054.561 - 13107.200: 18.2579% ( 31) 00:09:30.236 13107.200 - 13159.839: 18.6393% ( 31) 00:09:30.236 13159.839 - 13212.479: 19.1929% ( 45) 00:09:30.236 13212.479 - 13265.118: 19.6112% ( 34) 00:09:30.236 13265.118 - 13317.757: 19.9188% ( 25) 00:09:30.236 13317.757 - 13370.397: 20.3002% ( 31) 00:09:30.236 13370.397 - 13423.036: 20.7554% ( 37) 00:09:30.236 13423.036 - 13475.676: 21.2352% ( 39) 00:09:30.236 13475.676 - 13580.954: 22.0842% ( 69) 00:09:30.236 13580.954 - 13686.233: 22.9700% ( 72) 00:09:30.236 13686.233 - 13791.512: 24.0896% ( 91) 00:09:30.236 13791.512 - 13896.790: 25.1722% ( 88) 00:09:30.236 13896.790 - 14002.069: 26.2795% ( 90) 00:09:30.236 14002.069 - 14107.348: 27.3499% ( 87) 00:09:30.236 14107.348 - 14212.627: 28.4941% ( 93) 00:09:30.236 14212.627 - 14317.905: 29.6752% ( 96) 00:09:30.236 14317.905 - 14423.184: 31.0408% ( 111) 00:09:30.236 14423.184 - 14528.463: 32.4803% ( 117) 00:09:30.236 14528.463 - 14633.741: 33.7968% ( 107) 00:09:30.236 14633.741 - 14739.020: 35.0517% ( 102) 00:09:30.236 14739.020 - 14844.299: 36.2943% ( 101) 00:09:30.236 14844.299 - 14949.578: 37.6476% ( 110) 00:09:30.236 14949.578 - 15054.856: 38.9764% ( 108) 00:09:30.236 15054.856 - 15160.135: 40.2559% ( 104) 00:09:30.236 15160.135 - 15265.414: 41.7077% ( 118) 00:09:30.236 15265.414 - 15370.692: 43.0733% ( 111) 00:09:30.236 15370.692 - 15475.971: 44.4882% ( 115) 00:09:30.236 15475.971 - 15581.250: 46.0630% ( 128) 00:09:30.236 15581.250 - 15686.529: 47.7239% ( 135) 00:09:30.236 15686.529 - 15791.807: 49.3479% ( 132) 00:09:30.236 15791.807 - 15897.086: 51.0950% ( 142) 00:09:30.236 15897.086 - 16002.365: 52.7313% ( 133) 00:09:30.236 16002.365 - 16107.643: 54.6260% ( 154) 00:09:30.236 16107.643 - 16212.922: 56.9759% ( 191) 00:09:30.236 16212.922 - 16318.201: 58.9690% ( 162) 00:09:30.236 16318.201 - 16423.480: 60.8145% ( 150) 
00:09:30.236 16423.480 - 16528.758: 62.7092% ( 154) 00:09:30.236 16528.758 - 16634.037: 64.6161% ( 155) 00:09:30.236 16634.037 - 16739.316: 66.3632% ( 142) 00:09:30.236 16739.316 - 16844.594: 68.1841% ( 148) 00:09:30.236 16844.594 - 16949.873: 69.8696% ( 137) 00:09:30.236 16949.873 - 17055.152: 71.7397% ( 152) 00:09:30.236 17055.152 - 17160.431: 73.5482% ( 147) 00:09:30.236 17160.431 - 17265.709: 75.1599% ( 131) 00:09:30.236 17265.709 - 17370.988: 76.5625% ( 114) 00:09:30.236 17370.988 - 17476.267: 78.2972% ( 141) 00:09:30.236 17476.267 - 17581.545: 79.8105% ( 123) 00:09:30.236 17581.545 - 17686.824: 81.1885% ( 112) 00:09:30.236 17686.824 - 17792.103: 82.4926% ( 106) 00:09:30.236 17792.103 - 17897.382: 83.6737% ( 96) 00:09:30.236 17897.382 - 18002.660: 84.8671% ( 97) 00:09:30.236 18002.660 - 18107.939: 85.8514% ( 80) 00:09:30.236 18107.939 - 18213.218: 86.9464% ( 89) 00:09:30.236 18213.218 - 18318.496: 87.9798% ( 84) 00:09:30.236 18318.496 - 18423.775: 88.8656% ( 72) 00:09:30.236 18423.775 - 18529.054: 89.7269% ( 70) 00:09:30.236 18529.054 - 18634.333: 90.4897% ( 62) 00:09:30.236 18634.333 - 18739.611: 91.2402% ( 61) 00:09:30.236 18739.611 - 18844.890: 91.8799% ( 52) 00:09:30.236 18844.890 - 18950.169: 92.4090% ( 43) 00:09:30.236 18950.169 - 19055.447: 92.8888% ( 39) 00:09:30.236 19055.447 - 19160.726: 93.2702% ( 31) 00:09:30.236 19160.726 - 19266.005: 93.7869% ( 42) 00:09:30.236 19266.005 - 19371.284: 94.2175% ( 35) 00:09:30.236 19371.284 - 19476.562: 94.5374% ( 26) 00:09:30.236 19476.562 - 19581.841: 94.7958% ( 21) 00:09:30.236 19581.841 - 19687.120: 95.0787% ( 23) 00:09:30.236 19687.120 - 19792.398: 95.3125% ( 19) 00:09:30.236 19792.398 - 19897.677: 95.5463% ( 19) 00:09:30.236 19897.677 - 20002.956: 95.7923% ( 20) 00:09:30.236 20002.956 - 20108.235: 95.9892% ( 16) 00:09:30.236 20108.235 - 20213.513: 96.2229% ( 19) 00:09:30.236 20213.513 - 20318.792: 96.4075% ( 15) 00:09:30.236 20318.792 - 20424.071: 96.6166% ( 17) 00:09:30.236 20424.071 - 20529.349: 96.8012% ( 15) 00:09:30.236 20529.349 - 20634.628: 96.9980% ( 16) 00:09:30.236 20634.628 - 20739.907: 97.1826% ( 15) 00:09:30.236 20739.907 - 20845.186: 97.3302% ( 12) 00:09:30.236 20845.186 - 20950.464: 97.4656% ( 11) 00:09:30.236 20950.464 - 21055.743: 97.6378% ( 14) 00:09:30.236 21055.743 - 21161.022: 97.7608% ( 10) 00:09:30.236 21161.022 - 21266.300: 97.8593% ( 8) 00:09:30.236 21266.300 - 21371.579: 98.0069% ( 12) 00:09:30.236 21371.579 - 21476.858: 98.0807% ( 6) 00:09:30.236 21476.858 - 21582.137: 98.1668% ( 7) 00:09:30.236 21582.137 - 21687.415: 98.2406% ( 6) 00:09:30.236 21687.415 - 21792.694: 98.2653% ( 2) 00:09:30.236 21792.694 - 21897.973: 98.2899% ( 2) 00:09:30.236 21897.973 - 22003.251: 98.3145% ( 2) 00:09:30.236 22003.251 - 22108.530: 98.3514% ( 3) 00:09:30.236 22108.530 - 22213.809: 98.3760% ( 2) 00:09:30.236 22213.809 - 22319.088: 98.4006% ( 2) 00:09:30.236 22319.088 - 22424.366: 98.4129% ( 1) 00:09:30.236 22424.366 - 22529.645: 98.4252% ( 1) 00:09:30.236 26214.400 - 26319.679: 98.4621% ( 3) 00:09:30.236 26319.679 - 26424.957: 98.4744% ( 1) 00:09:30.236 26424.957 - 26530.236: 98.5113% ( 3) 00:09:30.236 26530.236 - 26635.515: 98.5359% ( 2) 00:09:30.236 26635.515 - 26740.794: 98.5605% ( 2) 00:09:30.236 26740.794 - 26846.072: 98.5851% ( 2) 00:09:30.236 26846.072 - 26951.351: 98.6220% ( 3) 00:09:30.236 26951.351 - 27161.908: 98.6836% ( 5) 00:09:30.236 27161.908 - 27372.466: 98.7205% ( 3) 00:09:30.236 27372.466 - 27583.023: 98.7820% ( 5) 00:09:30.236 27583.023 - 27793.581: 98.8312% ( 4) 00:09:30.236 27793.581 - 28004.138: 98.8927% 
( 5) 00:09:30.236 28004.138 - 28214.696: 98.9542% ( 5) 00:09:30.236 28214.696 - 28425.253: 99.0034% ( 4) 00:09:30.236 28425.253 - 28635.810: 99.0650% ( 5) 00:09:30.236 28635.810 - 28846.368: 99.1265% ( 5) 00:09:30.236 28846.368 - 29056.925: 99.1757% ( 4) 00:09:30.236 29056.925 - 29267.483: 99.2126% ( 3) 00:09:30.236 38953.124 - 39163.682: 99.2495% ( 3) 00:09:30.237 39163.682 - 39374.239: 99.2987% ( 4) 00:09:30.237 39374.239 - 39584.797: 99.3602% ( 5) 00:09:30.237 39584.797 - 39795.354: 99.3971% ( 3) 00:09:30.237 39795.354 - 40005.912: 99.4464% ( 4) 00:09:30.237 40005.912 - 40216.469: 99.5079% ( 5) 00:09:30.237 40216.469 - 40427.027: 99.5571% ( 4) 00:09:30.237 40427.027 - 40637.584: 99.6186% ( 5) 00:09:30.237 40637.584 - 40848.141: 99.6678% ( 4) 00:09:30.237 40848.141 - 41058.699: 99.7293% ( 5) 00:09:30.237 41058.699 - 41269.256: 99.7785% ( 4) 00:09:30.237 41269.256 - 41479.814: 99.8401% ( 5) 00:09:30.237 41479.814 - 41690.371: 99.8893% ( 4) 00:09:30.237 41690.371 - 41900.929: 99.9508% ( 5) 00:09:30.237 41900.929 - 42111.486: 100.0000% ( 4) 00:09:30.237 00:09:30.237 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:30.237 ============================================================================== 00:09:30.237 Range in us Cumulative IO count 00:09:30.237 9896.199 - 9948.839: 0.0123% ( 1) 00:09:30.237 9948.839 - 10001.478: 0.0369% ( 2) 00:09:30.237 10001.478 - 10054.117: 0.1476% ( 9) 00:09:30.237 10054.117 - 10106.757: 0.1969% ( 4) 00:09:30.237 10106.757 - 10159.396: 0.2830% ( 7) 00:09:30.237 10159.396 - 10212.035: 0.3322% ( 4) 00:09:30.237 10212.035 - 10264.675: 0.4798% ( 12) 00:09:30.237 10264.675 - 10317.314: 0.5413% ( 5) 00:09:30.237 10317.314 - 10369.953: 0.5659% ( 2) 00:09:30.237 10369.953 - 10422.593: 0.6029% ( 3) 00:09:30.237 10422.593 - 10475.232: 0.6275% ( 2) 00:09:30.237 10475.232 - 10527.871: 0.6644% ( 3) 00:09:30.237 10527.871 - 10580.511: 0.7013% ( 3) 00:09:30.237 10580.511 - 10633.150: 0.7382% ( 3) 00:09:30.237 10633.150 - 10685.790: 0.7751% ( 3) 00:09:30.237 10685.790 - 10738.429: 0.7874% ( 1) 00:09:30.237 10738.429 - 10791.068: 0.7997% ( 1) 00:09:30.237 10791.068 - 10843.708: 0.8243% ( 2) 00:09:30.237 10843.708 - 10896.347: 0.8981% ( 6) 00:09:30.237 10896.347 - 10948.986: 0.9473% ( 4) 00:09:30.237 10948.986 - 11001.626: 1.0458% ( 8) 00:09:30.237 11001.626 - 11054.265: 1.2303% ( 15) 00:09:30.237 11054.265 - 11106.904: 1.3656% ( 11) 00:09:30.237 11106.904 - 11159.544: 1.5133% ( 12) 00:09:30.237 11159.544 - 11212.183: 1.7101% ( 16) 00:09:30.237 11212.183 - 11264.822: 1.9685% ( 21) 00:09:30.237 11264.822 - 11317.462: 2.4114% ( 36) 00:09:30.237 11317.462 - 11370.101: 2.8297% ( 34) 00:09:30.237 11370.101 - 11422.741: 3.2603% ( 35) 00:09:30.237 11422.741 - 11475.380: 3.6540% ( 32) 00:09:30.237 11475.380 - 11528.019: 4.2200% ( 46) 00:09:30.237 11528.019 - 11580.659: 4.7982% ( 47) 00:09:30.237 11580.659 - 11633.298: 5.4503% ( 53) 00:09:30.237 11633.298 - 11685.937: 6.0655% ( 50) 00:09:30.237 11685.937 - 11738.577: 6.5699% ( 41) 00:09:30.237 11738.577 - 11791.216: 7.1235% ( 45) 00:09:30.237 11791.216 - 11843.855: 7.6280% ( 41) 00:09:30.237 11843.855 - 11896.495: 8.2185% ( 48) 00:09:30.237 11896.495 - 11949.134: 8.7844% ( 46) 00:09:30.237 11949.134 - 12001.773: 9.4488% ( 54) 00:09:30.237 12001.773 - 12054.413: 10.0148% ( 46) 00:09:30.237 12054.413 - 12107.052: 10.5807% ( 46) 00:09:30.237 12107.052 - 12159.692: 11.0482% ( 38) 00:09:30.237 12159.692 - 12212.331: 11.5281% ( 39) 00:09:30.237 12212.331 - 12264.970: 11.9587% ( 35) 00:09:30.237 12264.970 - 12317.610: 
12.3647% ( 33) 00:09:30.237 12317.610 - 12370.249: 12.8322% ( 38) 00:09:30.237 12370.249 - 12422.888: 13.1029% ( 22) 00:09:30.237 12422.888 - 12475.528: 13.5089% ( 33) 00:09:30.237 12475.528 - 12528.167: 13.9641% ( 37) 00:09:30.237 12528.167 - 12580.806: 14.4193% ( 37) 00:09:30.237 12580.806 - 12633.446: 14.9360% ( 42) 00:09:30.237 12633.446 - 12686.085: 15.5020% ( 46) 00:09:30.237 12686.085 - 12738.724: 16.1048% ( 49) 00:09:30.237 12738.724 - 12791.364: 16.6462% ( 44) 00:09:30.237 12791.364 - 12844.003: 17.1875% ( 44) 00:09:30.237 12844.003 - 12896.643: 17.5689% ( 31) 00:09:30.237 12896.643 - 12949.282: 18.0487% ( 39) 00:09:30.237 12949.282 - 13001.921: 18.5039% ( 37) 00:09:30.237 13001.921 - 13054.561: 18.8976% ( 32) 00:09:30.237 13054.561 - 13107.200: 19.2790% ( 31) 00:09:30.237 13107.200 - 13159.839: 19.6481% ( 30) 00:09:30.237 13159.839 - 13212.479: 20.0049% ( 29) 00:09:30.237 13212.479 - 13265.118: 20.3986% ( 32) 00:09:30.237 13265.118 - 13317.757: 20.8292% ( 35) 00:09:30.237 13317.757 - 13370.397: 21.2229% ( 32) 00:09:30.237 13370.397 - 13423.036: 21.5674% ( 28) 00:09:30.237 13423.036 - 13475.676: 21.9365% ( 30) 00:09:30.237 13475.676 - 13580.954: 22.7977% ( 70) 00:09:30.237 13580.954 - 13686.233: 23.5851% ( 64) 00:09:30.237 13686.233 - 13791.512: 24.6555% ( 87) 00:09:30.237 13791.512 - 13896.790: 25.8735% ( 99) 00:09:30.237 13896.790 - 14002.069: 26.9562% ( 88) 00:09:30.237 14002.069 - 14107.348: 27.8912% ( 76) 00:09:30.237 14107.348 - 14212.627: 28.7771% ( 72) 00:09:30.237 14212.627 - 14317.905: 30.0566% ( 104) 00:09:30.237 14317.905 - 14423.184: 31.3607% ( 106) 00:09:30.237 14423.184 - 14528.463: 32.4803% ( 91) 00:09:30.237 14528.463 - 14633.741: 33.7352% ( 102) 00:09:30.237 14633.741 - 14739.020: 34.8302% ( 89) 00:09:30.237 14739.020 - 14844.299: 35.6422% ( 66) 00:09:30.237 14844.299 - 14949.578: 36.5896% ( 77) 00:09:30.237 14949.578 - 15054.856: 37.9921% ( 114) 00:09:30.237 15054.856 - 15160.135: 39.4439% ( 118) 00:09:30.237 15160.135 - 15265.414: 40.7849% ( 109) 00:09:30.237 15265.414 - 15370.692: 42.0030% ( 99) 00:09:30.237 15370.692 - 15475.971: 43.2333% ( 100) 00:09:30.237 15475.971 - 15581.250: 44.5620% ( 108) 00:09:30.237 15581.250 - 15686.529: 46.2352% ( 136) 00:09:30.237 15686.529 - 15791.807: 47.6747% ( 117) 00:09:30.237 15791.807 - 15897.086: 49.5694% ( 154) 00:09:30.237 15897.086 - 16002.365: 51.5133% ( 158) 00:09:30.237 16002.365 - 16107.643: 53.3711% ( 151) 00:09:30.237 16107.643 - 16212.922: 55.2781% ( 155) 00:09:30.237 16212.922 - 16318.201: 57.5172% ( 182) 00:09:30.237 16318.201 - 16423.480: 59.5103% ( 162) 00:09:30.237 16423.480 - 16528.758: 61.3312% ( 148) 00:09:30.237 16528.758 - 16634.037: 63.4227% ( 170) 00:09:30.237 16634.037 - 16739.316: 65.3912% ( 160) 00:09:30.237 16739.316 - 16844.594: 67.3228% ( 157) 00:09:30.237 16844.594 - 16949.873: 69.3036% ( 161) 00:09:30.237 16949.873 - 17055.152: 71.3337% ( 165) 00:09:30.237 17055.152 - 17160.431: 73.2160% ( 153) 00:09:30.237 17160.431 - 17265.709: 75.3691% ( 175) 00:09:30.237 17265.709 - 17370.988: 76.9316% ( 127) 00:09:30.237 17370.988 - 17476.267: 78.9001% ( 160) 00:09:30.237 17476.267 - 17581.545: 80.6348% ( 141) 00:09:30.237 17581.545 - 17686.824: 81.9759% ( 109) 00:09:30.237 17686.824 - 17792.103: 83.1939% ( 99) 00:09:30.237 17792.103 - 17897.382: 84.2889% ( 89) 00:09:30.237 17897.382 - 18002.660: 85.4208% ( 92) 00:09:30.237 18002.660 - 18107.939: 86.5034% ( 88) 00:09:30.237 18107.939 - 18213.218: 87.4631% ( 78) 00:09:30.237 18213.218 - 18318.496: 88.4719% ( 82) 00:09:30.237 18318.496 - 18423.775: 
89.3578% ( 72) 00:09:30.237 18423.775 - 18529.054: 90.2190% ( 70) 00:09:30.237 18529.054 - 18634.333: 91.1294% ( 74) 00:09:30.237 18634.333 - 18739.611: 91.8307% ( 57) 00:09:30.237 18739.611 - 18844.890: 92.4705% ( 52) 00:09:30.237 18844.890 - 18950.169: 93.0979% ( 51) 00:09:30.237 18950.169 - 19055.447: 93.5531% ( 37) 00:09:30.237 19055.447 - 19160.726: 94.0576% ( 41) 00:09:30.237 19160.726 - 19266.005: 94.5620% ( 41) 00:09:30.237 19266.005 - 19371.284: 94.9803% ( 34) 00:09:30.237 19371.284 - 19476.562: 95.3986% ( 34) 00:09:30.237 19476.562 - 19581.841: 95.7800% ( 31) 00:09:30.237 19581.841 - 19687.120: 96.0999% ( 26) 00:09:30.237 19687.120 - 19792.398: 96.2968% ( 16) 00:09:30.237 19792.398 - 19897.677: 96.4936% ( 16) 00:09:30.237 19897.677 - 20002.956: 96.6658% ( 14) 00:09:30.237 20002.956 - 20108.235: 96.8381% ( 14) 00:09:30.237 20108.235 - 20213.513: 96.9857% ( 12) 00:09:30.237 20213.513 - 20318.792: 97.0965% ( 9) 00:09:30.237 20318.792 - 20424.071: 97.1949% ( 8) 00:09:30.237 20424.071 - 20529.349: 97.3179% ( 10) 00:09:30.237 20529.349 - 20634.628: 97.4902% ( 14) 00:09:30.237 20634.628 - 20739.907: 97.5763% ( 7) 00:09:30.237 20739.907 - 20845.186: 97.6870% ( 9) 00:09:30.237 20845.186 - 20950.464: 97.7854% ( 8) 00:09:30.237 20950.464 - 21055.743: 97.8716% ( 7) 00:09:30.237 21055.743 - 21161.022: 97.9823% ( 9) 00:09:30.237 21161.022 - 21266.300: 98.0807% ( 8) 00:09:30.237 21266.300 - 21371.579: 98.1791% ( 8) 00:09:30.237 21371.579 - 21476.858: 98.2530% ( 6) 00:09:30.237 21476.858 - 21582.137: 98.3145% ( 5) 00:09:30.237 21582.137 - 21687.415: 98.3760% ( 5) 00:09:30.237 21687.415 - 21792.694: 98.4252% ( 4) 00:09:30.237 25898.564 - 26003.843: 98.4498% ( 2) 00:09:30.237 26003.843 - 26109.121: 98.4990% ( 4) 00:09:30.237 26109.121 - 26214.400: 98.5359% ( 3) 00:09:30.237 26214.400 - 26319.679: 98.5728% ( 3) 00:09:30.237 26319.679 - 26424.957: 98.5974% ( 2) 00:09:30.237 26424.957 - 26530.236: 98.6220% ( 2) 00:09:30.237 26530.236 - 26635.515: 98.6590% ( 3) 00:09:30.237 26635.515 - 26740.794: 98.6959% ( 3) 00:09:30.237 26740.794 - 26846.072: 98.7205% ( 2) 00:09:30.237 26846.072 - 26951.351: 98.7574% ( 3) 00:09:30.237 26951.351 - 27161.908: 98.8312% ( 6) 00:09:30.237 27161.908 - 27372.466: 98.8927% ( 5) 00:09:30.237 27372.466 - 27583.023: 98.9542% ( 5) 00:09:30.237 27583.023 - 27793.581: 99.0281% ( 6) 00:09:30.237 27793.581 - 28004.138: 99.1019% ( 6) 00:09:30.237 28004.138 - 28214.696: 99.1634% ( 5) 00:09:30.237 28214.696 - 28425.253: 99.2126% ( 4) 00:09:30.237 37689.780 - 37900.337: 99.2741% ( 5) 00:09:30.237 37900.337 - 38110.895: 99.3356% ( 5) 00:09:30.237 38110.895 - 38321.452: 99.3971% ( 5) 00:09:30.237 38321.452 - 38532.010: 99.4710% ( 6) 00:09:30.237 38532.010 - 38742.567: 99.5325% ( 5) 00:09:30.237 38742.567 - 38953.124: 99.5940% ( 5) 00:09:30.237 38953.124 - 39163.682: 99.6678% ( 6) 00:09:30.237 39163.682 - 39374.239: 99.7416% ( 6) 00:09:30.237 39374.239 - 39584.797: 99.8031% ( 5) 00:09:30.237 39584.797 - 39795.354: 99.8647% ( 5) 00:09:30.238 39795.354 - 40005.912: 99.9262% ( 5) 00:09:30.238 40005.912 - 40216.469: 99.9877% ( 5) 00:09:30.238 40216.469 - 40427.027: 100.0000% ( 1) 00:09:30.238 00:09:30.238 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:30.238 ============================================================================== 00:09:30.238 Range in us Cumulative IO count 00:09:30.238 9106.609 - 9159.248: 0.0246% ( 2) 00:09:30.238 9159.248 - 9211.888: 0.0861% ( 5) 00:09:30.238 9211.888 - 9264.527: 0.1476% ( 5) 00:09:30.238 9264.527 - 9317.166: 0.2215% ( 6) 
00:09:30.238 9317.166 - 9369.806: 0.3076% ( 7) 00:09:30.238 9369.806 - 9422.445: 0.3937% ( 7) 00:09:30.238 9422.445 - 9475.084: 0.4429% ( 4) 00:09:30.238 9475.084 - 9527.724: 0.4798% ( 3) 00:09:30.238 9527.724 - 9580.363: 0.5044% ( 2) 00:09:30.238 9580.363 - 9633.002: 0.5413% ( 3) 00:09:30.238 9633.002 - 9685.642: 0.5659% ( 2) 00:09:30.238 9685.642 - 9738.281: 0.6029% ( 3) 00:09:30.238 9738.281 - 9790.920: 0.6275% ( 2) 00:09:30.238 9790.920 - 9843.560: 0.6521% ( 2) 00:09:30.238 9843.560 - 9896.199: 0.6890% ( 3) 00:09:30.238 9896.199 - 9948.839: 0.7382% ( 4) 00:09:30.238 9948.839 - 10001.478: 0.7751% ( 3) 00:09:30.238 10001.478 - 10054.117: 0.8243% ( 4) 00:09:30.238 10054.117 - 10106.757: 0.8858% ( 5) 00:09:30.238 10106.757 - 10159.396: 0.9350% ( 4) 00:09:30.238 10159.396 - 10212.035: 1.0335% ( 8) 00:09:30.238 10212.035 - 10264.675: 1.1688% ( 11) 00:09:30.238 10264.675 - 10317.314: 1.2426% ( 6) 00:09:30.238 10317.314 - 10369.953: 1.3410% ( 8) 00:09:30.238 10369.953 - 10422.593: 1.4395% ( 8) 00:09:30.238 10422.593 - 10475.232: 1.5256% ( 7) 00:09:30.238 10475.232 - 10527.871: 1.5994% ( 6) 00:09:30.238 10527.871 - 10580.511: 1.7101% ( 9) 00:09:30.238 10580.511 - 10633.150: 1.8086% ( 8) 00:09:30.238 10633.150 - 10685.790: 1.8824% ( 6) 00:09:30.238 10685.790 - 10738.429: 1.9316% ( 4) 00:09:30.238 10738.429 - 10791.068: 1.9931% ( 5) 00:09:30.238 10791.068 - 10843.708: 2.0423% ( 4) 00:09:30.238 10843.708 - 10896.347: 2.1161% ( 6) 00:09:30.238 10896.347 - 10948.986: 2.1654% ( 4) 00:09:30.238 10948.986 - 11001.626: 2.2269% ( 5) 00:09:30.238 11001.626 - 11054.265: 2.2638% ( 3) 00:09:30.238 11054.265 - 11106.904: 2.3499% ( 7) 00:09:30.238 11106.904 - 11159.544: 2.5098% ( 13) 00:09:30.238 11159.544 - 11212.183: 2.6329% ( 10) 00:09:30.238 11212.183 - 11264.822: 2.7805% ( 12) 00:09:30.238 11264.822 - 11317.462: 3.0143% ( 19) 00:09:30.238 11317.462 - 11370.101: 3.4449% ( 35) 00:09:30.238 11370.101 - 11422.741: 3.8509% ( 33) 00:09:30.238 11422.741 - 11475.380: 4.1708% ( 26) 00:09:30.238 11475.380 - 11528.019: 4.4414% ( 22) 00:09:30.238 11528.019 - 11580.659: 4.6260% ( 15) 00:09:30.238 11580.659 - 11633.298: 4.8967% ( 22) 00:09:30.238 11633.298 - 11685.937: 5.2165% ( 26) 00:09:30.238 11685.937 - 11738.577: 5.6718% ( 37) 00:09:30.238 11738.577 - 11791.216: 6.2377% ( 46) 00:09:30.238 11791.216 - 11843.855: 6.7298% ( 40) 00:09:30.238 11843.855 - 11896.495: 7.0989% ( 30) 00:09:30.238 11896.495 - 11949.134: 7.6156% ( 42) 00:09:30.238 11949.134 - 12001.773: 8.1078% ( 40) 00:09:30.238 12001.773 - 12054.413: 8.5753% ( 38) 00:09:30.238 12054.413 - 12107.052: 9.0920% ( 42) 00:09:30.238 12107.052 - 12159.692: 9.5719% ( 39) 00:09:30.238 12159.692 - 12212.331: 10.0763% ( 41) 00:09:30.238 12212.331 - 12264.970: 10.6791% ( 49) 00:09:30.238 12264.970 - 12317.610: 11.1344% ( 37) 00:09:30.238 12317.610 - 12370.249: 11.6634% ( 43) 00:09:30.238 12370.249 - 12422.888: 12.1186% ( 37) 00:09:30.238 12422.888 - 12475.528: 12.6476% ( 43) 00:09:30.238 12475.528 - 12528.167: 13.1890% ( 44) 00:09:30.238 12528.167 - 12580.806: 13.6565% ( 38) 00:09:30.238 12580.806 - 12633.446: 14.1363% ( 39) 00:09:30.238 12633.446 - 12686.085: 14.6654% ( 43) 00:09:30.238 12686.085 - 12738.724: 15.0837% ( 34) 00:09:30.238 12738.724 - 12791.364: 15.5143% ( 35) 00:09:30.238 12791.364 - 12844.003: 15.9572% ( 36) 00:09:30.238 12844.003 - 12896.643: 16.4247% ( 38) 00:09:30.238 12896.643 - 12949.282: 16.8922% ( 38) 00:09:30.238 12949.282 - 13001.921: 17.3351% ( 36) 00:09:30.238 13001.921 - 13054.561: 17.8150% ( 39) 00:09:30.238 13054.561 - 13107.200: 18.3932% 
( 47) 00:09:30.238 13107.200 - 13159.839: 18.8484% ( 37) 00:09:30.238 13159.839 - 13212.479: 19.2790% ( 35) 00:09:30.238 13212.479 - 13265.118: 19.6235% ( 28) 00:09:30.238 13265.118 - 13317.757: 20.1403% ( 42) 00:09:30.238 13317.757 - 13370.397: 20.7308% ( 48) 00:09:30.238 13370.397 - 13423.036: 21.2968% ( 46) 00:09:30.238 13423.036 - 13475.676: 21.8012% ( 41) 00:09:30.238 13475.676 - 13580.954: 22.7854% ( 80) 00:09:30.238 13580.954 - 13686.233: 23.6836% ( 73) 00:09:30.238 13686.233 - 13791.512: 24.6678% ( 80) 00:09:30.238 13791.512 - 13896.790: 25.5044% ( 68) 00:09:30.238 13896.790 - 14002.069: 26.1442% ( 52) 00:09:30.238 14002.069 - 14107.348: 27.1284% ( 80) 00:09:30.238 14107.348 - 14212.627: 27.9897% ( 70) 00:09:30.238 14212.627 - 14317.905: 29.3922% ( 114) 00:09:30.238 14317.905 - 14423.184: 30.8440% ( 118) 00:09:30.238 14423.184 - 14528.463: 32.1112% ( 103) 00:09:30.238 14528.463 - 14633.741: 33.3784% ( 103) 00:09:30.238 14633.741 - 14739.020: 34.8425% ( 119) 00:09:30.238 14739.020 - 14844.299: 36.4665% ( 132) 00:09:30.238 14844.299 - 14949.578: 38.2013% ( 141) 00:09:30.238 14949.578 - 15054.856: 39.7884% ( 129) 00:09:30.238 15054.856 - 15160.135: 41.6462% ( 151) 00:09:30.238 15160.135 - 15265.414: 43.5285% ( 153) 00:09:30.238 15265.414 - 15370.692: 45.0664% ( 125) 00:09:30.238 15370.692 - 15475.971: 46.5674% ( 122) 00:09:30.238 15475.971 - 15581.250: 47.9823% ( 115) 00:09:30.238 15581.250 - 15686.529: 49.2987% ( 107) 00:09:30.238 15686.529 - 15791.807: 50.4675% ( 95) 00:09:30.238 15791.807 - 15897.086: 51.6117% ( 93) 00:09:30.238 15897.086 - 16002.365: 52.9281% ( 107) 00:09:30.238 16002.365 - 16107.643: 54.3922% ( 119) 00:09:30.238 16107.643 - 16212.922: 55.9424% ( 126) 00:09:30.238 16212.922 - 16318.201: 57.4926% ( 126) 00:09:30.238 16318.201 - 16423.480: 59.1535% ( 135) 00:09:30.238 16423.480 - 16528.758: 60.9129% ( 143) 00:09:30.238 16528.758 - 16634.037: 62.4754% ( 127) 00:09:30.238 16634.037 - 16739.316: 64.2347% ( 143) 00:09:30.238 16739.316 - 16844.594: 65.9818% ( 142) 00:09:30.238 16844.594 - 16949.873: 68.2579% ( 185) 00:09:30.238 16949.873 - 17055.152: 70.1403% ( 153) 00:09:30.238 17055.152 - 17160.431: 72.1211% ( 161) 00:09:30.238 17160.431 - 17265.709: 73.7451% ( 132) 00:09:30.238 17265.709 - 17370.988: 75.4552% ( 139) 00:09:30.238 17370.988 - 17476.267: 77.4237% ( 160) 00:09:30.238 17476.267 - 17581.545: 79.1339% ( 139) 00:09:30.238 17581.545 - 17686.824: 80.8563% ( 140) 00:09:30.238 17686.824 - 17792.103: 82.8494% ( 162) 00:09:30.238 17792.103 - 17897.382: 84.5965% ( 142) 00:09:30.238 17897.382 - 18002.660: 86.1220% ( 124) 00:09:30.238 18002.660 - 18107.939: 87.3278% ( 98) 00:09:30.238 18107.939 - 18213.218: 88.5089% ( 96) 00:09:30.238 18213.218 - 18318.496: 89.6161% ( 90) 00:09:30.238 18318.496 - 18423.775: 90.6742% ( 86) 00:09:30.238 18423.775 - 18529.054: 91.5846% ( 74) 00:09:30.238 18529.054 - 18634.333: 92.3351% ( 61) 00:09:30.238 18634.333 - 18739.611: 92.9749% ( 52) 00:09:30.238 18739.611 - 18844.890: 93.5039% ( 43) 00:09:30.238 18844.890 - 18950.169: 94.0330% ( 43) 00:09:30.238 18950.169 - 19055.447: 94.4759% ( 36) 00:09:30.238 19055.447 - 19160.726: 94.9803% ( 41) 00:09:30.238 19160.726 - 19266.005: 95.3863% ( 33) 00:09:30.238 19266.005 - 19371.284: 95.8292% ( 36) 00:09:30.238 19371.284 - 19476.562: 96.2475% ( 34) 00:09:30.238 19476.562 - 19581.841: 96.5674% ( 26) 00:09:30.238 19581.841 - 19687.120: 96.8750% ( 25) 00:09:30.238 19687.120 - 19792.398: 97.0965% ( 18) 00:09:30.238 19792.398 - 19897.677: 97.2810% ( 15) 00:09:30.238 19897.677 - 20002.956: 97.3548% ( 
6) 00:09:30.238 20002.956 - 20108.235: 97.4163% ( 5) 00:09:30.238 20108.235 - 20213.513: 97.5025% ( 7) 00:09:30.238 20213.513 - 20318.792: 97.5763% ( 6) 00:09:30.238 20318.792 - 20424.071: 97.6255% ( 4) 00:09:30.238 20424.071 - 20529.349: 97.6378% ( 1) 00:09:30.238 20739.907 - 20845.186: 97.6870% ( 4) 00:09:30.238 20845.186 - 20950.464: 97.7362% ( 4) 00:09:30.238 20950.464 - 21055.743: 97.8100% ( 6) 00:09:30.238 21055.743 - 21161.022: 97.8593% ( 4) 00:09:30.238 21161.022 - 21266.300: 97.9208% ( 5) 00:09:30.238 21266.300 - 21371.579: 97.9823% ( 5) 00:09:30.238 21371.579 - 21476.858: 98.0438% ( 5) 00:09:30.238 21476.858 - 21582.137: 98.1053% ( 5) 00:09:30.238 21582.137 - 21687.415: 98.1545% ( 4) 00:09:30.238 21687.415 - 21792.694: 98.2160% ( 5) 00:09:30.238 21792.694 - 21897.973: 98.2899% ( 6) 00:09:30.238 21897.973 - 22003.251: 98.3514% ( 5) 00:09:30.238 22003.251 - 22108.530: 98.4252% ( 6) 00:09:30.238 26214.400 - 26319.679: 98.4498% ( 2) 00:09:30.238 26319.679 - 26424.957: 98.4867% ( 3) 00:09:30.238 26424.957 - 26530.236: 98.5236% ( 3) 00:09:30.238 26530.236 - 26635.515: 98.5605% ( 3) 00:09:30.238 26635.515 - 26740.794: 98.5851% ( 2) 00:09:30.238 26740.794 - 26846.072: 98.6220% ( 3) 00:09:30.238 26846.072 - 26951.351: 98.6590% ( 3) 00:09:30.238 26951.351 - 27161.908: 98.7205% ( 5) 00:09:30.238 27161.908 - 27372.466: 98.7820% ( 5) 00:09:30.238 27372.466 - 27583.023: 98.8558% ( 6) 00:09:30.238 27583.023 - 27793.581: 98.9173% ( 5) 00:09:30.238 27793.581 - 28004.138: 98.9788% ( 5) 00:09:30.238 28004.138 - 28214.696: 99.0404% ( 5) 00:09:30.238 28214.696 - 28425.253: 99.1019% ( 5) 00:09:30.238 28425.253 - 28635.810: 99.1634% ( 5) 00:09:30.238 28635.810 - 28846.368: 99.2126% ( 4) 00:09:30.238 37268.665 - 37479.222: 99.2618% ( 4) 00:09:30.238 37479.222 - 37689.780: 99.3233% ( 5) 00:09:30.238 37689.780 - 37900.337: 99.3971% ( 6) 00:09:30.238 37900.337 - 38110.895: 99.4710% ( 6) 00:09:30.238 38110.895 - 38321.452: 99.5448% ( 6) 00:09:30.238 38321.452 - 38532.010: 99.6063% ( 5) 00:09:30.238 38532.010 - 38742.567: 99.6678% ( 5) 00:09:30.238 38742.567 - 38953.124: 99.7416% ( 6) 00:09:30.239 38953.124 - 39163.682: 99.8031% ( 5) 00:09:30.239 39163.682 - 39374.239: 99.8647% ( 5) 00:09:30.239 39374.239 - 39584.797: 99.9385% ( 6) 00:09:30.239 39584.797 - 39795.354: 100.0000% ( 5) 00:09:30.239 00:09:30.239 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:30.239 ============================================================================== 00:09:30.239 Range in us Cumulative IO count 00:09:30.239 9317.166 - 9369.806: 0.0738% ( 6) 00:09:30.239 9369.806 - 9422.445: 0.1722% ( 8) 00:09:30.239 9422.445 - 9475.084: 0.2953% ( 10) 00:09:30.239 9475.084 - 9527.724: 0.4183% ( 10) 00:09:30.239 9527.724 - 9580.363: 0.4921% ( 6) 00:09:30.239 9580.363 - 9633.002: 0.5906% ( 8) 00:09:30.239 9633.002 - 9685.642: 0.6521% ( 5) 00:09:30.239 9685.642 - 9738.281: 0.7136% ( 5) 00:09:30.239 9738.281 - 9790.920: 0.7628% ( 4) 00:09:30.239 9790.920 - 9843.560: 0.8366% ( 6) 00:09:30.239 9843.560 - 9896.199: 0.8981% ( 5) 00:09:30.239 9896.199 - 9948.839: 0.9596% ( 5) 00:09:30.239 9948.839 - 10001.478: 1.0335% ( 6) 00:09:30.239 10001.478 - 10054.117: 1.0950% ( 5) 00:09:30.239 10054.117 - 10106.757: 1.1688% ( 6) 00:09:30.239 10106.757 - 10159.396: 1.2426% ( 6) 00:09:30.239 10159.396 - 10212.035: 1.3041% ( 5) 00:09:30.239 10212.035 - 10264.675: 1.3287% ( 2) 00:09:30.239 10264.675 - 10317.314: 1.3656% ( 3) 00:09:30.239 10317.314 - 10369.953: 1.3903% ( 2) 00:09:30.239 10369.953 - 10422.593: 1.4764% ( 7) 00:09:30.239 
10422.593 - 10475.232: 1.5748% ( 8) 00:09:30.239 10475.232 - 10527.871: 1.6732% ( 8) 00:09:30.239 10527.871 - 10580.511: 1.7717% ( 8) 00:09:30.239 10580.511 - 10633.150: 1.8578% ( 7) 00:09:30.239 10633.150 - 10685.790: 1.9562% ( 8) 00:09:30.239 10685.790 - 10738.429: 2.0792% ( 10) 00:09:30.239 10738.429 - 10791.068: 2.1654% ( 7) 00:09:30.239 10791.068 - 10843.708: 2.2146% ( 4) 00:09:30.239 10843.708 - 10896.347: 2.3130% ( 8) 00:09:30.239 10896.347 - 10948.986: 2.4237% ( 9) 00:09:30.239 10948.986 - 11001.626: 2.4975% ( 6) 00:09:30.239 11001.626 - 11054.265: 2.6083% ( 9) 00:09:30.239 11054.265 - 11106.904: 2.6698% ( 5) 00:09:30.239 11106.904 - 11159.544: 2.6821% ( 1) 00:09:30.239 11159.544 - 11212.183: 2.7190% ( 3) 00:09:30.239 11212.183 - 11264.822: 2.7559% ( 3) 00:09:30.239 11264.822 - 11317.462: 2.9281% ( 14) 00:09:30.239 11317.462 - 11370.101: 3.1865% ( 21) 00:09:30.239 11370.101 - 11422.741: 3.4326% ( 20) 00:09:30.239 11422.741 - 11475.380: 3.7402% ( 25) 00:09:30.239 11475.380 - 11528.019: 4.1216% ( 31) 00:09:30.239 11528.019 - 11580.659: 4.5030% ( 31) 00:09:30.239 11580.659 - 11633.298: 4.7859% ( 23) 00:09:30.239 11633.298 - 11685.937: 5.1427% ( 29) 00:09:30.239 11685.937 - 11738.577: 5.5856% ( 36) 00:09:30.239 11738.577 - 11791.216: 5.9547% ( 30) 00:09:30.239 11791.216 - 11843.855: 6.3115% ( 29) 00:09:30.239 11843.855 - 11896.495: 6.8036% ( 40) 00:09:30.239 11896.495 - 11949.134: 7.3819% ( 47) 00:09:30.239 11949.134 - 12001.773: 7.8494% ( 38) 00:09:30.239 12001.773 - 12054.413: 8.3415% ( 40) 00:09:30.239 12054.413 - 12107.052: 8.8706% ( 43) 00:09:30.239 12107.052 - 12159.692: 9.4611% ( 48) 00:09:30.239 12159.692 - 12212.331: 10.0886% ( 51) 00:09:30.239 12212.331 - 12264.970: 10.5438% ( 37) 00:09:30.239 12264.970 - 12317.610: 10.9867% ( 36) 00:09:30.239 12317.610 - 12370.249: 11.4173% ( 35) 00:09:30.239 12370.249 - 12422.888: 11.9094% ( 40) 00:09:30.239 12422.888 - 12475.528: 12.3155% ( 33) 00:09:30.239 12475.528 - 12528.167: 12.8199% ( 41) 00:09:30.239 12528.167 - 12580.806: 13.3735% ( 45) 00:09:30.239 12580.806 - 12633.446: 13.8533% ( 39) 00:09:30.239 12633.446 - 12686.085: 14.2840% ( 35) 00:09:30.239 12686.085 - 12738.724: 14.6777% ( 32) 00:09:30.239 12738.724 - 12791.364: 15.1206% ( 36) 00:09:30.239 12791.364 - 12844.003: 15.5512% ( 35) 00:09:30.239 12844.003 - 12896.643: 15.9941% ( 36) 00:09:30.239 12896.643 - 12949.282: 16.4862% ( 40) 00:09:30.239 12949.282 - 13001.921: 17.0030% ( 42) 00:09:30.239 13001.921 - 13054.561: 17.5074% ( 41) 00:09:30.239 13054.561 - 13107.200: 18.0856% ( 47) 00:09:30.239 13107.200 - 13159.839: 18.7254% ( 52) 00:09:30.239 13159.839 - 13212.479: 19.3159% ( 48) 00:09:30.239 13212.479 - 13265.118: 19.8081% ( 40) 00:09:30.239 13265.118 - 13317.757: 20.3371% ( 43) 00:09:30.239 13317.757 - 13370.397: 20.8415% ( 41) 00:09:30.239 13370.397 - 13423.036: 21.2721% ( 35) 00:09:30.239 13423.036 - 13475.676: 21.6658% ( 32) 00:09:30.239 13475.676 - 13580.954: 22.6747% ( 82) 00:09:30.239 13580.954 - 13686.233: 23.8927% ( 99) 00:09:30.239 13686.233 - 13791.512: 25.5536% ( 135) 00:09:30.239 13791.512 - 13896.790: 26.9808% ( 116) 00:09:30.239 13896.790 - 14002.069: 28.2111% ( 100) 00:09:30.239 14002.069 - 14107.348: 29.2569% ( 85) 00:09:30.239 14107.348 - 14212.627: 30.2904% ( 84) 00:09:30.239 14212.627 - 14317.905: 31.7298% ( 117) 00:09:30.239 14317.905 - 14423.184: 32.9601% ( 100) 00:09:30.239 14423.184 - 14528.463: 33.8460% ( 72) 00:09:30.239 14528.463 - 14633.741: 35.0394% ( 97) 00:09:30.239 14633.741 - 14739.020: 36.1713% ( 92) 00:09:30.239 14739.020 - 14844.299: 
37.4877% ( 107) 00:09:30.239 14844.299 - 14949.578: 38.9641% ( 120) 00:09:30.239 14949.578 - 15054.856: 40.3420% ( 112) 00:09:30.239 15054.856 - 15160.135: 41.6831% ( 109) 00:09:30.239 15160.135 - 15265.414: 43.1594% ( 120) 00:09:30.239 15265.414 - 15370.692: 44.5743% ( 115) 00:09:30.239 15370.692 - 15475.971: 46.0138% ( 117) 00:09:30.239 15475.971 - 15581.250: 47.4286% ( 115) 00:09:30.239 15581.250 - 15686.529: 48.9173% ( 121) 00:09:30.239 15686.529 - 15791.807: 50.4798% ( 127) 00:09:30.239 15791.807 - 15897.086: 51.9685% ( 121) 00:09:30.239 15897.086 - 16002.365: 53.1373% ( 95) 00:09:30.239 16002.365 - 16107.643: 54.3676% ( 100) 00:09:30.239 16107.643 - 16212.922: 55.5487% ( 96) 00:09:30.239 16212.922 - 16318.201: 56.8036% ( 102) 00:09:30.239 16318.201 - 16423.480: 58.1939% ( 113) 00:09:30.239 16423.480 - 16528.758: 59.7441% ( 126) 00:09:30.239 16528.758 - 16634.037: 61.2697% ( 124) 00:09:30.239 16634.037 - 16739.316: 63.1152% ( 150) 00:09:30.239 16739.316 - 16844.594: 65.2067% ( 170) 00:09:30.239 16844.594 - 16949.873: 67.1506% ( 158) 00:09:30.239 16949.873 - 17055.152: 69.1314% ( 161) 00:09:30.239 17055.152 - 17160.431: 71.1491% ( 164) 00:09:30.239 17160.431 - 17265.709: 73.2037% ( 167) 00:09:30.239 17265.709 - 17370.988: 75.0123% ( 147) 00:09:30.239 17370.988 - 17476.267: 76.8824% ( 152) 00:09:30.239 17476.267 - 17581.545: 78.8386% ( 159) 00:09:30.239 17581.545 - 17686.824: 80.3888% ( 126) 00:09:30.239 17686.824 - 17792.103: 81.9144% ( 124) 00:09:30.239 17792.103 - 17897.382: 83.5261% ( 131) 00:09:30.239 17897.382 - 18002.660: 85.2608% ( 141) 00:09:30.239 18002.660 - 18107.939: 87.1186% ( 151) 00:09:30.239 18107.939 - 18213.218: 88.6934% ( 128) 00:09:30.239 18213.218 - 18318.496: 89.9483% ( 102) 00:09:30.239 18318.496 - 18423.775: 91.0433% ( 89) 00:09:30.239 18423.775 - 18529.054: 92.0153% ( 79) 00:09:30.239 18529.054 - 18634.333: 92.9749% ( 78) 00:09:30.239 18634.333 - 18739.611: 93.7746% ( 65) 00:09:30.239 18739.611 - 18844.890: 94.2913% ( 42) 00:09:30.239 18844.890 - 18950.169: 94.7343% ( 36) 00:09:30.239 18950.169 - 19055.447: 95.1526% ( 34) 00:09:30.239 19055.447 - 19160.726: 95.5832% ( 35) 00:09:30.239 19160.726 - 19266.005: 95.8907% ( 25) 00:09:30.239 19266.005 - 19371.284: 96.1983% ( 25) 00:09:30.239 19371.284 - 19476.562: 96.5182% ( 26) 00:09:30.239 19476.562 - 19581.841: 96.8258% ( 25) 00:09:30.239 19581.841 - 19687.120: 97.0842% ( 21) 00:09:30.239 19687.120 - 19792.398: 97.2441% ( 13) 00:09:30.239 19792.398 - 19897.677: 97.3056% ( 5) 00:09:30.239 19897.677 - 20002.956: 97.3671% ( 5) 00:09:30.239 20002.956 - 20108.235: 97.4163% ( 4) 00:09:30.239 20108.235 - 20213.513: 97.4532% ( 3) 00:09:30.239 20213.513 - 20318.792: 97.4779% ( 2) 00:09:30.239 20318.792 - 20424.071: 97.5025% ( 2) 00:09:30.239 20424.071 - 20529.349: 97.5763% ( 6) 00:09:30.239 20529.349 - 20634.628: 97.6624% ( 7) 00:09:30.239 20634.628 - 20739.907: 97.7485% ( 7) 00:09:30.239 20739.907 - 20845.186: 97.8469% ( 8) 00:09:30.239 20845.186 - 20950.464: 97.9331% ( 7) 00:09:30.239 20950.464 - 21055.743: 97.9946% ( 5) 00:09:30.239 21055.743 - 21161.022: 98.0684% ( 6) 00:09:30.239 21161.022 - 21266.300: 98.1299% ( 5) 00:09:30.239 21266.300 - 21371.579: 98.1914% ( 5) 00:09:30.239 21371.579 - 21476.858: 98.2653% ( 6) 00:09:30.239 21476.858 - 21582.137: 98.3268% ( 5) 00:09:30.239 21582.137 - 21687.415: 98.3883% ( 5) 00:09:30.239 21687.415 - 21792.694: 98.4252% ( 3) 00:09:30.239 25161.613 - 25266.892: 98.4621% ( 3) 00:09:30.239 25266.892 - 25372.170: 98.4990% ( 3) 00:09:30.239 25372.170 - 25477.449: 98.5236% ( 2) 
00:09:30.239 25477.449 - 25582.728: 98.5605% ( 3) 00:09:30.239 25582.728 - 25688.006: 98.5974% ( 3) 00:09:30.239 25688.006 - 25793.285: 98.6344% ( 3) 00:09:30.239 25793.285 - 25898.564: 98.6590% ( 2) 00:09:30.239 25898.564 - 26003.843: 98.6959% ( 3) 00:09:30.239 26003.843 - 26109.121: 98.7328% ( 3) 00:09:30.239 26109.121 - 26214.400: 98.7574% ( 2) 00:09:30.239 26214.400 - 26319.679: 98.7943% ( 3) 00:09:30.239 26319.679 - 26424.957: 98.8312% ( 3) 00:09:30.239 26424.957 - 26530.236: 98.8681% ( 3) 00:09:30.239 26530.236 - 26635.515: 98.8927% ( 2) 00:09:30.239 26635.515 - 26740.794: 98.9296% ( 3) 00:09:30.239 26740.794 - 26846.072: 98.9665% ( 3) 00:09:30.239 26846.072 - 26951.351: 98.9911% ( 2) 00:09:30.239 26951.351 - 27161.908: 99.0650% ( 6) 00:09:30.239 27161.908 - 27372.466: 99.1265% ( 5) 00:09:30.239 27372.466 - 27583.023: 99.1880% ( 5) 00:09:30.239 27583.023 - 27793.581: 99.2126% ( 2) 00:09:30.239 35794.763 - 36005.320: 99.2495% ( 3) 00:09:30.239 36005.320 - 36215.878: 99.3233% ( 6) 00:09:30.239 36215.878 - 36426.435: 99.3848% ( 5) 00:09:30.239 36426.435 - 36636.993: 99.4587% ( 6) 00:09:30.239 36636.993 - 36847.550: 99.5202% ( 5) 00:09:30.239 36847.550 - 37058.108: 99.5817% ( 5) 00:09:30.239 37058.108 - 37268.665: 99.6555% ( 6) 00:09:30.240 37268.665 - 37479.222: 99.7170% ( 5) 00:09:30.240 37479.222 - 37689.780: 99.7908% ( 6) 00:09:30.240 37689.780 - 37900.337: 99.8524% ( 5) 00:09:30.240 37900.337 - 38110.895: 99.9139% ( 5) 00:09:30.240 38110.895 - 38321.452: 99.9754% ( 5) 00:09:30.240 38321.452 - 38532.010: 100.0000% ( 2) 00:09:30.240 00:09:30.240 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:30.240 ============================================================================== 00:09:30.240 Range in us Cumulative IO count 00:09:30.240 8422.297 - 8474.937: 0.0369% ( 3) 00:09:30.240 8474.937 - 8527.576: 0.0492% ( 1) 00:09:30.240 8527.576 - 8580.215: 0.0738% ( 2) 00:09:30.240 8580.215 - 8632.855: 0.0984% ( 2) 00:09:30.240 8632.855 - 8685.494: 0.1230% ( 2) 00:09:30.240 8685.494 - 8738.133: 0.1353% ( 1) 00:09:30.240 8738.133 - 8790.773: 0.1599% ( 2) 00:09:30.240 8790.773 - 8843.412: 0.1845% ( 2) 00:09:30.240 8843.412 - 8896.051: 0.1969% ( 1) 00:09:30.240 8896.051 - 8948.691: 0.2215% ( 2) 00:09:30.240 8948.691 - 9001.330: 0.2461% ( 2) 00:09:30.240 9001.330 - 9053.969: 0.2707% ( 2) 00:09:30.240 9053.969 - 9106.609: 0.2953% ( 2) 00:09:30.240 9106.609 - 9159.248: 0.3076% ( 1) 00:09:30.240 9159.248 - 9211.888: 0.3322% ( 2) 00:09:30.240 9211.888 - 9264.527: 0.3568% ( 2) 00:09:30.240 9264.527 - 9317.166: 0.3814% ( 2) 00:09:30.240 9317.166 - 9369.806: 0.3937% ( 1) 00:09:30.240 9369.806 - 9422.445: 0.4183% ( 2) 00:09:30.240 9422.445 - 9475.084: 0.4552% ( 3) 00:09:30.240 9475.084 - 9527.724: 0.5167% ( 5) 00:09:30.240 9527.724 - 9580.363: 0.6521% ( 11) 00:09:30.240 9580.363 - 9633.002: 0.7751% ( 10) 00:09:30.240 9633.002 - 9685.642: 0.8612% ( 7) 00:09:30.240 9685.642 - 9738.281: 0.9104% ( 4) 00:09:30.240 9738.281 - 9790.920: 0.9596% ( 4) 00:09:30.240 9790.920 - 9843.560: 1.0212% ( 5) 00:09:30.240 9843.560 - 9896.199: 1.0704% ( 4) 00:09:30.240 9896.199 - 9948.839: 1.1196% ( 4) 00:09:30.240 9948.839 - 10001.478: 1.1811% ( 5) 00:09:30.240 10001.478 - 10054.117: 1.2426% ( 5) 00:09:30.240 10054.117 - 10106.757: 1.2918% ( 4) 00:09:30.240 10106.757 - 10159.396: 1.3656% ( 6) 00:09:30.240 10159.396 - 10212.035: 1.4272% ( 5) 00:09:30.240 10212.035 - 10264.675: 1.4887% ( 5) 00:09:30.240 10264.675 - 10317.314: 1.5502% ( 5) 00:09:30.240 10317.314 - 10369.953: 1.5748% ( 2) 00:09:30.240 10369.953 
- 10422.593: 1.5994% ( 2) 00:09:30.240 10527.871 - 10580.511: 1.6486% ( 4) 00:09:30.240 10580.511 - 10633.150: 1.7101% ( 5) 00:09:30.240 10633.150 - 10685.790: 1.7717% ( 5) 00:09:30.240 10685.790 - 10738.429: 1.8701% ( 8) 00:09:30.240 10738.429 - 10791.068: 2.0300% ( 13) 00:09:30.240 10791.068 - 10843.708: 2.1654% ( 11) 00:09:30.240 10843.708 - 10896.347: 2.2638% ( 8) 00:09:30.240 10896.347 - 10948.986: 2.3745% ( 9) 00:09:30.240 10948.986 - 11001.626: 2.4237% ( 4) 00:09:30.240 11001.626 - 11054.265: 2.4729% ( 4) 00:09:30.240 11054.265 - 11106.904: 2.5468% ( 6) 00:09:30.240 11106.904 - 11159.544: 2.6575% ( 9) 00:09:30.240 11159.544 - 11212.183: 2.8174% ( 13) 00:09:30.240 11212.183 - 11264.822: 3.0266% ( 17) 00:09:30.240 11264.822 - 11317.462: 3.2849% ( 21) 00:09:30.240 11317.462 - 11370.101: 3.6294% ( 28) 00:09:30.240 11370.101 - 11422.741: 3.7402% ( 9) 00:09:30.240 11422.741 - 11475.380: 3.9001% ( 13) 00:09:30.240 11475.380 - 11528.019: 4.1462% ( 20) 00:09:30.240 11528.019 - 11580.659: 4.3553% ( 17) 00:09:30.240 11580.659 - 11633.298: 4.7367% ( 31) 00:09:30.240 11633.298 - 11685.937: 5.1919% ( 37) 00:09:30.240 11685.937 - 11738.577: 5.7333% ( 44) 00:09:30.240 11738.577 - 11791.216: 6.2746% ( 44) 00:09:30.240 11791.216 - 11843.855: 6.9144% ( 52) 00:09:30.240 11843.855 - 11896.495: 7.4311% ( 42) 00:09:30.240 11896.495 - 11949.134: 7.9724% ( 44) 00:09:30.240 11949.134 - 12001.773: 8.4892% ( 42) 00:09:30.240 12001.773 - 12054.413: 8.9321% ( 36) 00:09:30.240 12054.413 - 12107.052: 9.3750% ( 36) 00:09:30.240 12107.052 - 12159.692: 9.9040% ( 43) 00:09:30.240 12159.692 - 12212.331: 10.4208% ( 42) 00:09:30.240 12212.331 - 12264.970: 10.9621% ( 44) 00:09:30.240 12264.970 - 12317.610: 11.4296% ( 38) 00:09:30.240 12317.610 - 12370.249: 11.8233% ( 32) 00:09:30.240 12370.249 - 12422.888: 12.2908% ( 38) 00:09:30.240 12422.888 - 12475.528: 12.7707% ( 39) 00:09:30.240 12475.528 - 12528.167: 13.1767% ( 33) 00:09:30.240 12528.167 - 12580.806: 13.5458% ( 30) 00:09:30.240 12580.806 - 12633.446: 13.9026% ( 29) 00:09:30.240 12633.446 - 12686.085: 14.2470% ( 28) 00:09:30.240 12686.085 - 12738.724: 14.4931% ( 20) 00:09:30.240 12738.724 - 12791.364: 14.7515% ( 21) 00:09:30.240 12791.364 - 12844.003: 14.9852% ( 19) 00:09:30.240 12844.003 - 12896.643: 15.2436% ( 21) 00:09:30.240 12896.643 - 12949.282: 15.5512% ( 25) 00:09:30.240 12949.282 - 13001.921: 15.8834% ( 27) 00:09:30.240 13001.921 - 13054.561: 16.3017% ( 34) 00:09:30.240 13054.561 - 13107.200: 16.7323% ( 35) 00:09:30.240 13107.200 - 13159.839: 17.2121% ( 39) 00:09:30.240 13159.839 - 13212.479: 17.6796% ( 38) 00:09:30.240 13212.479 - 13265.118: 18.2456% ( 46) 00:09:30.240 13265.118 - 13317.757: 18.7131% ( 38) 00:09:30.240 13317.757 - 13370.397: 19.1929% ( 39) 00:09:30.240 13370.397 - 13423.036: 19.8819% ( 56) 00:09:30.240 13423.036 - 13475.676: 20.5955% ( 58) 00:09:30.240 13475.676 - 13580.954: 21.5551% ( 78) 00:09:30.240 13580.954 - 13686.233: 22.6378% ( 88) 00:09:30.240 13686.233 - 13791.512: 24.1511% ( 123) 00:09:30.240 13791.512 - 13896.790: 25.4552% ( 106) 00:09:30.240 13896.790 - 14002.069: 26.8701% ( 115) 00:09:30.240 14002.069 - 14107.348: 28.4818% ( 131) 00:09:30.240 14107.348 - 14212.627: 30.2534% ( 144) 00:09:30.240 14212.627 - 14317.905: 31.7913% ( 125) 00:09:30.240 14317.905 - 14423.184: 33.1816% ( 113) 00:09:30.240 14423.184 - 14528.463: 34.4857% ( 106) 00:09:30.240 14528.463 - 14633.741: 35.7160% ( 100) 00:09:30.240 14633.741 - 14739.020: 36.8971% ( 96) 00:09:30.240 14739.020 - 14844.299: 38.0659% ( 95) 00:09:30.240 14844.299 - 14949.578: 
39.2470% ( 96) 00:09:30.240 14949.578 - 15054.856: 40.2805% ( 84) 00:09:30.240 15054.856 - 15160.135: 41.4001% ( 91) 00:09:30.240 15160.135 - 15265.414: 42.5443% ( 93) 00:09:30.240 15265.414 - 15370.692: 43.6393% ( 89) 00:09:30.240 15370.692 - 15475.971: 45.1033% ( 119) 00:09:30.240 15475.971 - 15581.250: 46.5182% ( 115) 00:09:30.240 15581.250 - 15686.529: 48.1176% ( 130) 00:09:30.240 15686.529 - 15791.807: 49.9385% ( 148) 00:09:30.240 15791.807 - 15897.086: 51.8578% ( 156) 00:09:30.240 15897.086 - 16002.365: 53.5679% ( 139) 00:09:30.240 16002.365 - 16107.643: 55.2534% ( 137) 00:09:30.240 16107.643 - 16212.922: 56.8529% ( 130) 00:09:30.240 16212.922 - 16318.201: 58.4031% ( 126) 00:09:30.240 16318.201 - 16423.480: 59.8548% ( 118) 00:09:30.240 16423.480 - 16528.758: 61.7495% ( 154) 00:09:30.240 16528.758 - 16634.037: 63.4473% ( 138) 00:09:30.240 16634.037 - 16739.316: 65.1329% ( 137) 00:09:30.240 16739.316 - 16844.594: 66.7815% ( 134) 00:09:30.240 16844.594 - 16949.873: 68.6885% ( 155) 00:09:30.240 16949.873 - 17055.152: 70.4232% ( 141) 00:09:30.240 17055.152 - 17160.431: 72.5148% ( 170) 00:09:30.240 17160.431 - 17265.709: 74.5571% ( 166) 00:09:30.240 17265.709 - 17370.988: 76.1811% ( 132) 00:09:30.240 17370.988 - 17476.267: 77.7805% ( 130) 00:09:30.240 17476.267 - 17581.545: 79.5276% ( 142) 00:09:30.240 17581.545 - 17686.824: 81.5945% ( 168) 00:09:30.240 17686.824 - 17792.103: 83.5261% ( 157) 00:09:30.240 17792.103 - 17897.382: 85.1132% ( 129) 00:09:30.240 17897.382 - 18002.660: 86.4419% ( 108) 00:09:30.240 18002.660 - 18107.939: 87.5246% ( 88) 00:09:30.240 18107.939 - 18213.218: 88.7057% ( 96) 00:09:30.240 18213.218 - 18318.496: 89.7884% ( 88) 00:09:30.240 18318.496 - 18423.775: 90.9080% ( 91) 00:09:30.240 18423.775 - 18529.054: 91.9414% ( 84) 00:09:30.240 18529.054 - 18634.333: 92.8765% ( 76) 00:09:30.240 18634.333 - 18739.611: 93.5039% ( 51) 00:09:30.240 18739.611 - 18844.890: 93.9961% ( 40) 00:09:30.240 18844.890 - 18950.169: 94.4882% ( 40) 00:09:30.240 18950.169 - 19055.447: 94.8819% ( 32) 00:09:30.240 19055.447 - 19160.726: 95.2756% ( 32) 00:09:30.240 19160.726 - 19266.005: 95.5832% ( 25) 00:09:30.240 19266.005 - 19371.284: 95.8784% ( 24) 00:09:30.240 19371.284 - 19476.562: 96.0876% ( 17) 00:09:30.241 19476.562 - 19581.841: 96.2352% ( 12) 00:09:30.241 19581.841 - 19687.120: 96.3952% ( 13) 00:09:30.241 19687.120 - 19792.398: 96.5920% ( 16) 00:09:30.241 19792.398 - 19897.677: 96.7643% ( 14) 00:09:30.241 19897.677 - 20002.956: 96.9119% ( 12) 00:09:30.241 20002.956 - 20108.235: 97.0965% ( 15) 00:09:30.241 20108.235 - 20213.513: 97.2810% ( 15) 00:09:30.241 20213.513 - 20318.792: 97.4286% ( 12) 00:09:30.241 20318.792 - 20424.071: 97.5763% ( 12) 00:09:30.241 20424.071 - 20529.349: 97.7239% ( 12) 00:09:30.241 20529.349 - 20634.628: 97.8346% ( 9) 00:09:30.241 20634.628 - 20739.907: 97.9454% ( 9) 00:09:30.241 20739.907 - 20845.186: 98.0315% ( 7) 00:09:30.241 20845.186 - 20950.464: 98.0807% ( 4) 00:09:30.241 20950.464 - 21055.743: 98.1299% ( 4) 00:09:30.241 21055.743 - 21161.022: 98.1914% ( 5) 00:09:30.241 21161.022 - 21266.300: 98.2406% ( 4) 00:09:30.241 21266.300 - 21371.579: 98.2899% ( 4) 00:09:30.241 21371.579 - 21476.858: 98.3514% ( 5) 00:09:30.241 21476.858 - 21582.137: 98.4006% ( 4) 00:09:30.241 21582.137 - 21687.415: 98.4252% ( 2) 00:09:30.241 24214.104 - 24319.383: 98.4744% ( 4) 00:09:30.241 24319.383 - 24424.662: 98.5605% ( 7) 00:09:30.241 24424.662 - 24529.941: 98.6097% ( 4) 00:09:30.241 24529.941 - 24635.219: 98.6959% ( 7) 00:09:30.241 24635.219 - 24740.498: 98.7205% ( 2) 
00:09:30.241 24740.498 - 24845.777: 98.7451% ( 2) 00:09:30.241 24845.777 - 24951.055: 98.7697% ( 2) 00:09:30.241 24951.055 - 25056.334: 98.7943% ( 2) 00:09:30.241 25056.334 - 25161.613: 98.8189% ( 2) 00:09:30.241 25161.613 - 25266.892: 98.8435% ( 2) 00:09:30.241 25266.892 - 25372.170: 98.8681% ( 2) 00:09:30.241 25372.170 - 25477.449: 98.9050% ( 3) 00:09:30.241 25477.449 - 25582.728: 98.9296% ( 2) 00:09:30.241 25582.728 - 25688.006: 98.9665% ( 3) 00:09:30.241 25688.006 - 25793.285: 99.0034% ( 3) 00:09:30.241 25793.285 - 25898.564: 99.0404% ( 3) 00:09:30.241 25898.564 - 26003.843: 99.0650% ( 2) 00:09:30.241 26003.843 - 26109.121: 99.1019% ( 3) 00:09:30.241 26109.121 - 26214.400: 99.1388% ( 3) 00:09:30.241 26214.400 - 26319.679: 99.1634% ( 2) 00:09:30.241 26319.679 - 26424.957: 99.2003% ( 3) 00:09:30.241 26424.957 - 26530.236: 99.2126% ( 1) 00:09:30.241 34531.418 - 34741.976: 99.2372% ( 2) 00:09:30.241 34741.976 - 34952.533: 99.2987% ( 5) 00:09:30.241 34952.533 - 35163.091: 99.3602% ( 5) 00:09:30.241 35163.091 - 35373.648: 99.4218% ( 5) 00:09:30.241 35373.648 - 35584.206: 99.4956% ( 6) 00:09:30.241 35584.206 - 35794.763: 99.5571% ( 5) 00:09:30.241 35794.763 - 36005.320: 99.6186% ( 5) 00:09:30.241 36005.320 - 36215.878: 99.6801% ( 5) 00:09:30.241 36215.878 - 36426.435: 99.7539% ( 6) 00:09:30.241 36426.435 - 36636.993: 99.8155% ( 5) 00:09:30.241 36636.993 - 36847.550: 99.8770% ( 5) 00:09:30.241 36847.550 - 37058.108: 99.9508% ( 6) 00:09:30.241 37058.108 - 37268.665: 100.0000% ( 4) 00:09:30.241 00:09:30.241 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:30.241 ============================================================================== 00:09:30.241 Range in us Cumulative IO count 00:09:30.241 7422.149 - 7474.789: 0.0369% ( 3) 00:09:30.241 7474.789 - 7527.428: 0.0492% ( 1) 00:09:30.241 7527.428 - 7580.067: 0.0738% ( 2) 00:09:30.241 7580.067 - 7632.707: 0.0861% ( 1) 00:09:30.241 7632.707 - 7685.346: 0.1107% ( 2) 00:09:30.241 7685.346 - 7737.986: 0.1353% ( 2) 00:09:30.241 7737.986 - 7790.625: 0.1599% ( 2) 00:09:30.241 7790.625 - 7843.264: 0.1845% ( 2) 00:09:30.241 7843.264 - 7895.904: 0.1969% ( 1) 00:09:30.241 7895.904 - 7948.543: 0.2215% ( 2) 00:09:30.241 7948.543 - 8001.182: 0.2461% ( 2) 00:09:30.241 8001.182 - 8053.822: 0.2707% ( 2) 00:09:30.241 8053.822 - 8106.461: 0.2830% ( 1) 00:09:30.241 8106.461 - 8159.100: 0.3076% ( 2) 00:09:30.241 8159.100 - 8211.740: 0.3322% ( 2) 00:09:30.241 8211.740 - 8264.379: 0.3568% ( 2) 00:09:30.241 8264.379 - 8317.018: 0.3691% ( 1) 00:09:30.241 8317.018 - 8369.658: 0.3937% ( 2) 00:09:30.241 8369.658 - 8422.297: 0.4183% ( 2) 00:09:30.241 8422.297 - 8474.937: 0.4306% ( 1) 00:09:30.241 8474.937 - 8527.576: 0.4552% ( 2) 00:09:30.241 8527.576 - 8580.215: 0.4798% ( 2) 00:09:30.241 8580.215 - 8632.855: 0.5044% ( 2) 00:09:30.241 8632.855 - 8685.494: 0.5290% ( 2) 00:09:30.241 8685.494 - 8738.133: 0.5413% ( 1) 00:09:30.241 8738.133 - 8790.773: 0.5659% ( 2) 00:09:30.241 8790.773 - 8843.412: 0.5906% ( 2) 00:09:30.241 8843.412 - 8896.051: 0.6152% ( 2) 00:09:30.241 8896.051 - 8948.691: 0.6275% ( 1) 00:09:30.241 8948.691 - 9001.330: 0.6521% ( 2) 00:09:30.241 9001.330 - 9053.969: 0.6767% ( 2) 00:09:30.241 9053.969 - 9106.609: 0.6890% ( 1) 00:09:30.241 9106.609 - 9159.248: 0.7136% ( 2) 00:09:30.241 9159.248 - 9211.888: 0.7382% ( 2) 00:09:30.241 9211.888 - 9264.527: 0.7628% ( 2) 00:09:30.241 9264.527 - 9317.166: 0.7874% ( 2) 00:09:30.241 9527.724 - 9580.363: 0.7997% ( 1) 00:09:30.241 9580.363 - 9633.002: 0.8612% ( 5) 00:09:30.241 9633.002 - 9685.642: 0.9350% 
( 6) 00:09:30.241 9685.642 - 9738.281: 1.0212% ( 7) 00:09:30.241 9738.281 - 9790.920: 1.0827% ( 5) 00:09:30.241 9790.920 - 9843.560: 1.1442% ( 5) 00:09:30.241 9843.560 - 9896.199: 1.2180% ( 6) 00:09:30.241 9896.199 - 9948.839: 1.2426% ( 2) 00:09:30.241 9948.839 - 10001.478: 1.2795% ( 3) 00:09:30.241 10001.478 - 10054.117: 1.3164% ( 3) 00:09:30.241 10054.117 - 10106.757: 1.3410% ( 2) 00:09:30.241 10106.757 - 10159.396: 1.3903% ( 4) 00:09:30.241 10159.396 - 10212.035: 1.4149% ( 2) 00:09:30.241 10212.035 - 10264.675: 1.4518% ( 3) 00:09:30.241 10264.675 - 10317.314: 1.4887% ( 3) 00:09:30.241 10317.314 - 10369.953: 1.5133% ( 2) 00:09:30.241 10369.953 - 10422.593: 1.5625% ( 4) 00:09:30.241 10422.593 - 10475.232: 1.5748% ( 1) 00:09:30.241 10633.150 - 10685.790: 1.5994% ( 2) 00:09:30.241 10685.790 - 10738.429: 1.7963% ( 16) 00:09:30.241 10738.429 - 10791.068: 1.9439% ( 12) 00:09:30.241 10791.068 - 10843.708: 2.0300% ( 7) 00:09:30.241 10843.708 - 10896.347: 2.1407% ( 9) 00:09:30.241 10896.347 - 10948.986: 2.2146% ( 6) 00:09:30.241 10948.986 - 11001.626: 2.2515% ( 3) 00:09:30.241 11001.626 - 11054.265: 2.3130% ( 5) 00:09:30.241 11054.265 - 11106.904: 2.3622% ( 4) 00:09:30.241 11106.904 - 11159.544: 2.5344% ( 14) 00:09:30.241 11159.544 - 11212.183: 2.7067% ( 14) 00:09:30.241 11212.183 - 11264.822: 2.9651% ( 21) 00:09:30.241 11264.822 - 11317.462: 3.2357% ( 22) 00:09:30.241 11317.462 - 11370.101: 3.5310% ( 24) 00:09:30.241 11370.101 - 11422.741: 3.8386% ( 25) 00:09:30.241 11422.741 - 11475.380: 4.1585% ( 26) 00:09:30.241 11475.380 - 11528.019: 4.4537% ( 24) 00:09:30.241 11528.019 - 11580.659: 4.7367% ( 23) 00:09:30.241 11580.659 - 11633.298: 5.1304% ( 32) 00:09:30.241 11633.298 - 11685.937: 5.6225% ( 40) 00:09:30.241 11685.937 - 11738.577: 6.1393% ( 42) 00:09:30.241 11738.577 - 11791.216: 6.7913% ( 53) 00:09:30.241 11791.216 - 11843.855: 7.4065% ( 50) 00:09:30.241 11843.855 - 11896.495: 8.0955% ( 56) 00:09:30.241 11896.495 - 11949.134: 8.6983% ( 49) 00:09:30.241 11949.134 - 12001.773: 9.1535% ( 37) 00:09:30.241 12001.773 - 12054.413: 9.8425% ( 56) 00:09:30.241 12054.413 - 12107.052: 10.2485% ( 33) 00:09:30.241 12107.052 - 12159.692: 10.6176% ( 30) 00:09:30.241 12159.692 - 12212.331: 11.0113% ( 32) 00:09:30.241 12212.331 - 12264.970: 11.4296% ( 34) 00:09:30.241 12264.970 - 12317.610: 11.8233% ( 32) 00:09:30.241 12317.610 - 12370.249: 12.2908% ( 38) 00:09:30.241 12370.249 - 12422.888: 12.7707% ( 39) 00:09:30.241 12422.888 - 12475.528: 13.1398% ( 30) 00:09:30.241 12475.528 - 12528.167: 13.5458% ( 33) 00:09:30.241 12528.167 - 12580.806: 13.9395% ( 32) 00:09:30.241 12580.806 - 12633.446: 14.3455% ( 33) 00:09:30.241 12633.446 - 12686.085: 14.7023% ( 29) 00:09:30.241 12686.085 - 12738.724: 15.0098% ( 25) 00:09:30.241 12738.724 - 12791.364: 15.3789% ( 30) 00:09:30.241 12791.364 - 12844.003: 15.6988% ( 26) 00:09:30.241 12844.003 - 12896.643: 15.9695% ( 22) 00:09:30.241 12896.643 - 12949.282: 16.3878% ( 34) 00:09:30.241 12949.282 - 13001.921: 16.6954% ( 25) 00:09:30.241 13001.921 - 13054.561: 17.0153% ( 26) 00:09:30.241 13054.561 - 13107.200: 17.4213% ( 33) 00:09:30.241 13107.200 - 13159.839: 17.7904% ( 30) 00:09:30.241 13159.839 - 13212.479: 18.1594% ( 30) 00:09:30.241 13212.479 - 13265.118: 18.4916% ( 27) 00:09:30.241 13265.118 - 13317.757: 18.9222% ( 35) 00:09:30.241 13317.757 - 13370.397: 19.4267% ( 41) 00:09:30.241 13370.397 - 13423.036: 19.8819% ( 37) 00:09:30.241 13423.036 - 13475.676: 20.3371% ( 37) 00:09:30.241 13475.676 - 13580.954: 21.5059% ( 95) 00:09:30.241 13580.954 - 13686.233: 22.7977% ( 105) 
00:09:30.241 13686.233 - 13791.512: 24.4218% ( 132) 00:09:30.241 13791.512 - 13896.790: 25.9843% ( 127) 00:09:30.241 13896.790 - 14002.069: 27.4483% ( 119) 00:09:30.241 14002.069 - 14107.348: 28.7648% ( 107) 00:09:30.241 14107.348 - 14212.627: 29.7982% ( 84) 00:09:30.241 14212.627 - 14317.905: 30.8686% ( 87) 00:09:30.241 14317.905 - 14423.184: 31.9759% ( 90) 00:09:30.241 14423.184 - 14528.463: 32.9355% ( 78) 00:09:30.241 14528.463 - 14633.741: 34.1535% ( 99) 00:09:30.241 14633.741 - 14739.020: 35.4577% ( 106) 00:09:30.241 14739.020 - 14844.299: 37.0202% ( 127) 00:09:30.241 14844.299 - 14949.578: 38.3858% ( 111) 00:09:30.241 14949.578 - 15054.856: 39.5177% ( 92) 00:09:30.241 15054.856 - 15160.135: 41.0433% ( 124) 00:09:30.241 15160.135 - 15265.414: 42.5935% ( 126) 00:09:30.241 15265.414 - 15370.692: 44.2052% ( 131) 00:09:30.241 15370.692 - 15475.971: 45.9154% ( 139) 00:09:30.241 15475.971 - 15581.250: 47.5271% ( 131) 00:09:30.241 15581.250 - 15686.529: 48.8681% ( 109) 00:09:30.241 15686.529 - 15791.807: 50.1353% ( 103) 00:09:30.241 15791.807 - 15897.086: 51.6732% ( 125) 00:09:30.241 15897.086 - 16002.365: 53.6048% ( 157) 00:09:30.241 16002.365 - 16107.643: 55.2781% ( 136) 00:09:30.241 16107.643 - 16212.922: 56.8159% ( 125) 00:09:30.242 16212.922 - 16318.201: 58.3538% ( 125) 00:09:30.242 16318.201 - 16423.480: 60.1747% ( 148) 00:09:30.242 16423.480 - 16528.758: 62.4877% ( 188) 00:09:30.242 16528.758 - 16634.037: 64.8130% ( 189) 00:09:30.242 16634.037 - 16739.316: 66.9045% ( 170) 00:09:30.242 16739.316 - 16844.594: 68.8976% ( 162) 00:09:30.242 16844.594 - 16949.873: 70.8784% ( 161) 00:09:30.242 16949.873 - 17055.152: 72.5394% ( 135) 00:09:30.242 17055.152 - 17160.431: 74.1511% ( 131) 00:09:30.242 17160.431 - 17265.709: 75.9104% ( 143) 00:09:30.242 17265.709 - 17370.988: 77.7190% ( 147) 00:09:30.242 17370.988 - 17476.267: 79.2200% ( 122) 00:09:30.242 17476.267 - 17581.545: 80.6841% ( 119) 00:09:30.242 17581.545 - 17686.824: 81.9882% ( 106) 00:09:30.242 17686.824 - 17792.103: 83.2800% ( 105) 00:09:30.242 17792.103 - 17897.382: 84.7810% ( 122) 00:09:30.242 17897.382 - 18002.660: 86.0482% ( 103) 00:09:30.242 18002.660 - 18107.939: 87.2539% ( 98) 00:09:30.242 18107.939 - 18213.218: 88.1890% ( 76) 00:09:30.242 18213.218 - 18318.496: 89.0502% ( 70) 00:09:30.242 18318.496 - 18423.775: 89.8253% ( 63) 00:09:30.242 18423.775 - 18529.054: 90.5881% ( 62) 00:09:30.242 18529.054 - 18634.333: 91.5969% ( 82) 00:09:30.242 18634.333 - 18739.611: 92.3351% ( 60) 00:09:30.242 18739.611 - 18844.890: 92.9995% ( 54) 00:09:30.242 18844.890 - 18950.169: 93.6393% ( 52) 00:09:30.242 18950.169 - 19055.447: 94.1929% ( 45) 00:09:30.242 19055.447 - 19160.726: 94.7219% ( 43) 00:09:30.242 19160.726 - 19266.005: 95.1280% ( 33) 00:09:30.242 19266.005 - 19371.284: 95.4232% ( 24) 00:09:30.242 19371.284 - 19476.562: 95.6693% ( 20) 00:09:30.242 19476.562 - 19581.841: 95.8907% ( 18) 00:09:30.242 19581.841 - 19687.120: 96.0015% ( 9) 00:09:30.242 19687.120 - 19792.398: 96.1368% ( 11) 00:09:30.242 19792.398 - 19897.677: 96.3337% ( 16) 00:09:30.242 19897.677 - 20002.956: 96.6166% ( 23) 00:09:30.242 20002.956 - 20108.235: 96.8258% ( 17) 00:09:30.242 20108.235 - 20213.513: 97.0349% ( 17) 00:09:30.242 20213.513 - 20318.792: 97.2687% ( 19) 00:09:30.242 20318.792 - 20424.071: 97.4409% ( 14) 00:09:30.242 20424.071 - 20529.349: 97.6378% ( 16) 00:09:30.242 20529.349 - 20634.628: 97.7977% ( 13) 00:09:30.242 20634.628 - 20739.907: 97.9577% ( 13) 00:09:30.242 20739.907 - 20845.186: 98.0561% ( 8) 00:09:30.242 20845.186 - 20950.464: 98.1668% ( 9) 
00:09:30.242 20950.464 - 21055.743: 98.2776% ( 9) 00:09:30.242 21055.743 - 21161.022: 98.3391% ( 5) 00:09:30.242 21161.022 - 21266.300: 98.4006% ( 5) 00:09:30.242 21266.300 - 21371.579: 98.4252% ( 2) 00:09:30.242 23161.317 - 23266.596: 98.4498% ( 2) 00:09:30.242 23266.596 - 23371.875: 98.4867% ( 3) 00:09:30.242 23371.875 - 23477.153: 98.5359% ( 4) 00:09:30.242 23477.153 - 23582.432: 98.5851% ( 4) 00:09:30.242 23582.432 - 23687.711: 98.6097% ( 2) 00:09:30.242 23687.711 - 23792.990: 98.6836% ( 6) 00:09:30.242 23792.990 - 23898.268: 98.7205% ( 3) 00:09:30.242 23898.268 - 24003.547: 98.7697% ( 4) 00:09:30.242 24003.547 - 24108.826: 98.8189% ( 4) 00:09:30.242 24108.826 - 24214.104: 98.8681% ( 4) 00:09:30.242 24214.104 - 24319.383: 98.8927% ( 2) 00:09:30.242 24319.383 - 24424.662: 98.9173% ( 2) 00:09:30.242 24424.662 - 24529.941: 98.9542% ( 3) 00:09:30.242 24529.941 - 24635.219: 98.9788% ( 2) 00:09:30.242 24635.219 - 24740.498: 99.0157% ( 3) 00:09:30.242 24740.498 - 24845.777: 99.0404% ( 2) 00:09:30.242 24845.777 - 24951.055: 99.0773% ( 3) 00:09:30.242 24951.055 - 25056.334: 99.1019% ( 2) 00:09:30.242 25056.334 - 25161.613: 99.1265% ( 2) 00:09:30.242 25161.613 - 25266.892: 99.1511% ( 2) 00:09:30.242 25266.892 - 25372.170: 99.1757% ( 2) 00:09:30.242 25372.170 - 25477.449: 99.2126% ( 3) 00:09:30.242 32425.844 - 32636.402: 99.2741% ( 5) 00:09:30.242 33689.189 - 33899.746: 99.2987% ( 2) 00:09:30.242 33899.746 - 34110.304: 99.3602% ( 5) 00:09:30.242 34110.304 - 34320.861: 99.4218% ( 5) 00:09:30.242 34320.861 - 34531.418: 99.4833% ( 5) 00:09:30.242 34531.418 - 34741.976: 99.5448% ( 5) 00:09:30.242 34741.976 - 34952.533: 99.6186% ( 6) 00:09:30.242 34952.533 - 35163.091: 99.6801% ( 5) 00:09:30.242 35163.091 - 35373.648: 99.7539% ( 6) 00:09:30.242 35373.648 - 35584.206: 99.8155% ( 5) 00:09:30.242 35584.206 - 35794.763: 99.8770% ( 5) 00:09:30.242 35794.763 - 36005.320: 99.9508% ( 6) 00:09:30.242 36005.320 - 36215.878: 100.0000% ( 4)
00:09:30.242
00:09:30.242 15:09:28 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:30.242
00:09:30.242 real 0m2.703s
00:09:30.242 user 0m2.277s
00:09:30.242 sys 0m0.321s
00:09:30.242 15:09:28 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:30.242 15:09:28 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:30.242 ************************************
00:09:30.242 END TEST nvme_perf
00:09:30.242 ************************************
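A note for readers of the nvme_perf output above: each "Summary latency data" block is a set of percentiles read off the cumulative "Range in us / Cumulative IO count" histogram printed after it. A minimal sketch of that reading in plain Python (this is not SPDK's code; the bucket data below is invented for illustration, and reporting the bucket's upper edge is an assumption about the tool's convention):

def pct_latency(buckets, target_pct):
    # buckets: [(low_us, high_us, cum_pct)] in ascending order; cum_pct is the
    # cumulative percentage of IOs completed within the bucket's range,
    # matching the histogram lines in the log above.
    for low, high, cum_pct in buckets:
        if cum_pct >= target_pct:
            return high  # assumed convention: report the bucket's upper edge
    return buckets[-1][1]

# Toy histogram: 50% of IOs complete at or below ~15.9 ms in this invented data.
toy = [(9580.0, 9633.0, 0.16), (9633.0, 15897.0, 51.1), (15897.0, 42111.0, 100.0)]
print(pct_latency(toy, 50.0))   # -> 15897.0
print(pct_latency(toy, 99.0))   # -> 42111.0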
Initialization complete. 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 INFO: using host memory buffer for IO 00:09:30.500 Hello world! 00:09:30.500 00:09:30.500 real 0m0.282s 00:09:30.500 user 0m0.094s 00:09:30.500 sys 0m0.139s 00:09:30.500 15:09:28 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.500 15:09:28 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:30.500 ************************************ 00:09:30.500 END TEST nvme_hello_world 00:09:30.500 ************************************ 00:09:30.500 15:09:28 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:30.500 15:09:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:30.500 15:09:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.501 15:09:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:30.501 ************************************ 00:09:30.501 START TEST nvme_sgl 00:09:30.501 ************************************ 00:09:30.501 15:09:28 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:30.759 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:30.759 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:30.759 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:30.759 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:30.759 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:30.759 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:30.759 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:30.759 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:30.759 0000:00:12.0: 
build_io_request_3 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:30.759 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:30.759 NVMe Readv/Writev Request test 00:09:30.759 Attached to 0000:00:10.0 00:09:30.759 Attached to 0000:00:11.0 00:09:30.759 Attached to 0000:00:13.0 00:09:30.759 Attached to 0000:00:12.0 00:09:30.759 0000:00:10.0: build_io_request_2 test passed 00:09:30.759 0000:00:10.0: build_io_request_4 test passed 00:09:30.759 0000:00:10.0: build_io_request_5 test passed 00:09:30.759 0000:00:10.0: build_io_request_6 test passed 00:09:30.759 0000:00:10.0: build_io_request_7 test passed 00:09:30.759 0000:00:10.0: build_io_request_10 test passed 00:09:30.759 0000:00:11.0: build_io_request_2 test passed 00:09:30.759 0000:00:11.0: build_io_request_4 test passed 00:09:30.759 0000:00:11.0: build_io_request_5 test passed 00:09:30.759 0000:00:11.0: build_io_request_6 test passed 00:09:30.759 0000:00:11.0: build_io_request_7 test passed 00:09:30.759 0000:00:11.0: build_io_request_10 test passed 00:09:30.759 Cleaning up... 00:09:30.759 00:09:30.759 real 0m0.314s 00:09:30.759 user 0m0.130s 00:09:30.759 sys 0m0.142s 00:09:30.759 15:09:29 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.759 15:09:29 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:30.759 ************************************ 00:09:30.759 END TEST nvme_sgl 00:09:30.759 ************************************ 00:09:31.018 15:09:29 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:31.018 15:09:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.018 15:09:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.018 15:09:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.018 ************************************ 00:09:31.018 START TEST nvme_e2edp 00:09:31.018 ************************************ 00:09:31.018 15:09:29 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:31.276 NVMe Write/Read with End-to-End data protection test 00:09:31.276 Attached to 0000:00:10.0 00:09:31.276 Attached to 0000:00:11.0 00:09:31.276 Attached to 0000:00:13.0 00:09:31.276 Attached to 0000:00:12.0 00:09:31.276 Cleaning up... 
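[Editor's note] The e2edp run below only attaches and cleans up, apparently because none of the emulated namespaces expose protection information for the data-protection path to exercise. When PI is available, a write with end-to-end protection looks roughly like the sketch below. The function and flag names are real SPDK identifiers; the ns/qpair/buffer setup is assumed to exist already:

/* assumes: ns, qpair, data buffer `buf`, and metadata buffer `md` are set up */
uint32_t io_flags = SPDK_NVME_IO_FLAGS_PRACT |        /* controller generates/strips PI */
                    SPDK_NVME_IO_FLAGS_PRCHK_GUARD |  /* verify the CRC guard field */
                    SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;  /* verify the reference tag */
int rc = spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, md,
                                        0 /* starting LBA */, 8 /* LBA count */,
                                        write_done_cb, NULL, io_flags,
                                        0 /* apptag mask */, 0 /* apptag */);
if (rc != 0) {
    fprintf(stderr, "write submit failed: %d\n", rc);
}

With PRACT set, the controller inserts the protection fields itself, so the host buffer need not carry them; clearing PRACT instead makes the host responsible for computing the guard/reftag before submit.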
00:09:31.276 00:09:31.276 real 0m0.274s 00:09:31.276 user 0m0.104s 00:09:31.276 sys 0m0.127s 00:09:31.276 15:09:29 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.276 ************************************ 00:09:31.276 END TEST nvme_e2edp 00:09:31.276 ************************************ 00:09:31.276 15:09:29 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:31.276 15:09:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:31.276 15:09:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.276 15:09:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.276 15:09:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.276 ************************************ 00:09:31.276 START TEST nvme_reserve 00:09:31.276 ************************************ 00:09:31.276 15:09:29 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:31.534 ===================================================== 00:09:31.534 NVMe Controller at PCI bus 0, device 16, function 0 00:09:31.534 ===================================================== 00:09:31.534 Reservations: Not Supported 00:09:31.534 ===================================================== 00:09:31.534 NVMe Controller at PCI bus 0, device 17, function 0 00:09:31.534 ===================================================== 00:09:31.534 Reservations: Not Supported 00:09:31.534 ===================================================== 00:09:31.534 NVMe Controller at PCI bus 0, device 19, function 0 00:09:31.534 ===================================================== 00:09:31.534 Reservations: Not Supported 00:09:31.534 ===================================================== 00:09:31.534 NVMe Controller at PCI bus 0, device 18, function 0 00:09:31.534 ===================================================== 00:09:31.534 Reservations: Not Supported 00:09:31.534 Reservation test passed 00:09:31.534 00:09:31.534 real 0m0.282s 00:09:31.534 user 0m0.091s 00:09:31.534 sys 0m0.145s 00:09:31.534 15:09:29 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.534 ************************************ 00:09:31.534 END TEST nvme_reserve 00:09:31.534 ************************************ 00:09:31.534 15:09:29 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:31.534 15:09:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:31.534 15:09:30 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.534 15:09:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.534 15:09:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.534 ************************************ 00:09:31.534 START TEST nvme_err_injection 00:09:31.534 ************************************ 00:09:31.534 15:09:30 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:32.100 NVMe Error Injection test 00:09:32.100 Attached to 0000:00:10.0 00:09:32.100 Attached to 0000:00:11.0 00:09:32.100 Attached to 0000:00:13.0 00:09:32.100 Attached to 0000:00:12.0 00:09:32.100 0000:00:11.0: get features failed as expected 00:09:32.100 0000:00:13.0: get features failed as expected 00:09:32.100 0000:00:12.0: get features failed as expected 00:09:32.100 0000:00:10.0: get features failed as expected 00:09:32.100 
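[Editor's note] The "failed as expected" / "successfully as expected" pairs in the err_injection output come from SPDK's software error-injection hooks: the test arms an injected completion status for a given opcode, watches the command fail, then disarms it and watches the same command succeed. A sketch assuming an already-attached ctrlr (the injection helpers are the public spdk_nvme_qpair_* API; the specific status codes here are illustrative):

/* arm: make the next Get Features admin command complete with Invalid Field */
spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* NULL = admin qpair */,
                                        SPDK_NVME_OPC_GET_FEATURES, true /* admin cmd */,
                                        0 /* no timeout */, 1 /* inject once */,
                                        SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
/* ... issue Get Features, expect failure ("get features failed as expected") ... */

/* disarm, then reissue and expect success ("get features successfully as expected") */
spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES);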
0000:00:10.0: get features successfully as expected 00:09:32.100 0000:00:11.0: get features successfully as expected 00:09:32.100 0000:00:13.0: get features successfully as expected 00:09:32.100 0000:00:12.0: get features successfully as expected 00:09:32.100 0000:00:10.0: read failed as expected 00:09:32.100 0000:00:11.0: read failed as expected 00:09:32.100 0000:00:13.0: read failed as expected 00:09:32.100 0000:00:12.0: read failed as expected 00:09:32.100 0000:00:10.0: read successfully as expected 00:09:32.100 0000:00:11.0: read successfully as expected 00:09:32.100 0000:00:13.0: read successfully as expected 00:09:32.100 0000:00:12.0: read successfully as expected 00:09:32.100 Cleaning up... 00:09:32.100 00:09:32.100 real 0m0.334s 00:09:32.100 user 0m0.125s 00:09:32.100 sys 0m0.152s 00:09:32.100 15:09:30 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.100 15:09:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:32.100 ************************************ 00:09:32.100 END TEST nvme_err_injection 00:09:32.100 ************************************ 00:09:32.100 15:09:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:32.100 15:09:30 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:09:32.100 15:09:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.100 15:09:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.100 ************************************ 00:09:32.100 START TEST nvme_overhead 00:09:32.100 ************************************ 00:09:32.100 15:09:30 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:33.475 Initializing NVMe Controllers 00:09:33.475 Attached to 0000:00:10.0 00:09:33.475 Attached to 0000:00:11.0 00:09:33.475 Attached to 0000:00:13.0 00:09:33.475 Attached to 0000:00:12.0 00:09:33.475 Initialization complete. Launching workers. 
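[Editor's note] The overhead tool whose histograms follow times the two software halves of an I/O separately: the cost of the submit call itself and the cost of the process_completions poll that reaps it, both in TSC ticks converted to nanoseconds and bucketed into the "submit" and "complete" histograms below. A rough per-sample sketch, assuming ns, qpair, and a DMA-safe buf already exist:

uint64_t tsc_hz = spdk_get_ticks_hz();
uint64_t t0, submit_ns, complete_ns;

t0 = spdk_get_ticks();
if (spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1, io_done_cb, NULL, 0) != 0) {
    /* queue full or transport error; skip this sample */
}
submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / tsc_hz;

for (;;) {
    t0 = spdk_get_ticks();
    if (spdk_nvme_qpair_process_completions(qpair, 0) > 0) {
        /* charge only the poll that actually reaped the I/O */
        complete_ns = (spdk_get_ticks() - t0) * 1000000000ULL / tsc_hz;
        break;
    }
}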
00:09:33.475 submit (in ns) avg, min, max = 13996.2, 11661.8, 69325.3 00:09:33.475 complete (in ns) avg, min, max = 9174.3, 8150.2, 129209.6 00:09:33.475 00:09:33.475 Submit histogram 00:09:33.475 ================ 00:09:33.475 Range in us Cumulative Count 00:09:33.475 11.618 - 11.669: 0.0083% ( 1) 00:09:33.475 12.132 - 12.183: 0.0165% ( 1) 00:09:33.475 12.337 - 12.389: 0.0248% ( 1) 00:09:33.475 12.543 - 12.594: 0.0330% ( 1) 00:09:33.475 12.800 - 12.851: 0.0413% ( 1) 00:09:33.475 12.903 - 12.954: 0.0825% ( 5) 00:09:33.475 12.954 - 13.006: 0.4127% ( 40) 00:09:33.475 13.006 - 13.057: 1.4444% ( 125) 00:09:33.475 13.057 - 13.108: 3.7471% ( 279) 00:09:33.475 13.108 - 13.160: 7.9729% ( 512) 00:09:33.475 13.160 - 13.263: 19.5939% ( 1408) 00:09:33.475 13.263 - 13.365: 30.5134% ( 1323) 00:09:33.475 13.365 - 13.468: 40.2195% ( 1176) 00:09:33.475 13.468 - 13.571: 49.8762% ( 1170) 00:09:33.475 13.571 - 13.674: 58.7157% ( 1071) 00:09:33.475 13.674 - 13.777: 65.5992% ( 834) 00:09:33.475 13.777 - 13.880: 71.8306% ( 755) 00:09:33.475 13.880 - 13.982: 77.1459% ( 644) 00:09:33.475 13.982 - 14.085: 81.4708% ( 524) 00:09:33.475 14.085 - 14.188: 85.1106% ( 441) 00:09:33.475 14.188 - 14.291: 87.6527% ( 308) 00:09:33.475 14.291 - 14.394: 89.8069% ( 261) 00:09:33.475 14.394 - 14.496: 91.4081% ( 194) 00:09:33.475 14.496 - 14.599: 92.6296% ( 148) 00:09:33.475 14.599 - 14.702: 93.3972% ( 93) 00:09:33.475 14.702 - 14.805: 93.9501% ( 67) 00:09:33.475 14.805 - 14.908: 94.3793% ( 52) 00:09:33.475 14.908 - 15.010: 94.6187% ( 29) 00:09:33.475 15.010 - 15.113: 94.7095% ( 11) 00:09:33.475 15.113 - 15.216: 94.7755% ( 8) 00:09:33.475 15.216 - 15.319: 94.8498% ( 9) 00:09:33.475 15.319 - 15.422: 94.8580% ( 1) 00:09:33.475 15.422 - 15.524: 94.8745% ( 2) 00:09:33.475 15.524 - 15.627: 94.8911% ( 2) 00:09:33.475 15.627 - 15.730: 94.8993% ( 1) 00:09:33.475 15.730 - 15.833: 94.9323% ( 4) 00:09:33.475 15.833 - 15.936: 94.9571% ( 3) 00:09:33.475 15.936 - 16.039: 94.9818% ( 3) 00:09:33.475 16.039 - 16.141: 95.0231% ( 5) 00:09:33.475 16.141 - 16.244: 95.0974% ( 9) 00:09:33.475 16.244 - 16.347: 95.1139% ( 2) 00:09:33.475 16.347 - 16.450: 95.1882% ( 9) 00:09:33.475 16.450 - 16.553: 95.2294% ( 5) 00:09:33.475 16.553 - 16.655: 95.2955% ( 8) 00:09:33.475 16.655 - 16.758: 95.3120% ( 2) 00:09:33.475 16.758 - 16.861: 95.3533% ( 5) 00:09:33.475 16.861 - 16.964: 95.3780% ( 3) 00:09:33.475 16.964 - 17.067: 95.4193% ( 5) 00:09:33.475 17.067 - 17.169: 95.4605% ( 5) 00:09:33.475 17.169 - 17.272: 95.5431% ( 10) 00:09:33.475 17.272 - 17.375: 95.6504% ( 13) 00:09:33.475 17.375 - 17.478: 95.6834% ( 4) 00:09:33.475 17.478 - 17.581: 95.7824% ( 12) 00:09:33.475 17.581 - 17.684: 95.8485% ( 8) 00:09:33.475 17.684 - 17.786: 95.9393% ( 11) 00:09:33.475 17.786 - 17.889: 96.0300% ( 11) 00:09:33.475 17.889 - 17.992: 96.0796% ( 6) 00:09:33.475 17.992 - 18.095: 96.1538% ( 9) 00:09:33.475 18.095 - 18.198: 96.2529% ( 12) 00:09:33.475 18.198 - 18.300: 96.3437% ( 11) 00:09:33.475 18.300 - 18.403: 96.4675% ( 15) 00:09:33.475 18.403 - 18.506: 96.5418% ( 9) 00:09:33.475 18.506 - 18.609: 96.6078% ( 8) 00:09:33.475 18.609 - 18.712: 96.7233% ( 14) 00:09:33.475 18.712 - 18.814: 96.8306% ( 13) 00:09:33.475 18.814 - 18.917: 96.9462% ( 14) 00:09:33.475 18.917 - 19.020: 97.0370% ( 11) 00:09:33.475 19.020 - 19.123: 97.0865% ( 6) 00:09:33.475 19.123 - 19.226: 97.1525% ( 8) 00:09:33.475 19.226 - 19.329: 97.2020% ( 6) 00:09:33.475 19.329 - 19.431: 97.2598% ( 7) 00:09:33.475 19.431 - 19.534: 97.3093% ( 6) 00:09:33.475 19.534 - 19.637: 97.3506% ( 5) 00:09:33.475 19.637 - 19.740: 97.3671% 
( 2) 00:09:33.475 19.740 - 19.843: 97.3836% ( 2) 00:09:33.475 19.843 - 19.945: 97.4249% ( 5) 00:09:33.475 19.945 - 20.048: 97.4331% ( 1) 00:09:33.475 20.048 - 20.151: 97.4744% ( 5) 00:09:33.475 20.151 - 20.254: 97.4909% ( 2) 00:09:33.475 20.254 - 20.357: 97.5157% ( 3) 00:09:33.475 20.357 - 20.459: 97.5900% ( 9) 00:09:33.475 20.459 - 20.562: 97.6477% ( 7) 00:09:33.475 20.562 - 20.665: 97.7385% ( 11) 00:09:33.475 20.665 - 20.768: 97.7715% ( 4) 00:09:33.475 20.768 - 20.871: 97.8458% ( 9) 00:09:33.475 20.871 - 20.973: 97.9119% ( 8) 00:09:33.475 20.973 - 21.076: 97.9861% ( 9) 00:09:33.475 21.076 - 21.179: 98.0522% ( 8) 00:09:33.475 21.179 - 21.282: 98.1017% ( 6) 00:09:33.475 21.282 - 21.385: 98.1842% ( 10) 00:09:33.475 21.385 - 21.488: 98.2420% ( 7) 00:09:33.475 21.488 - 21.590: 98.3575% ( 14) 00:09:33.475 21.590 - 21.693: 98.3906% ( 4) 00:09:33.475 21.693 - 21.796: 98.4648% ( 9) 00:09:33.475 21.796 - 21.899: 98.5309% ( 8) 00:09:33.475 21.899 - 22.002: 98.5804% ( 6) 00:09:33.475 22.002 - 22.104: 98.6547% ( 9) 00:09:33.475 22.104 - 22.207: 98.6794% ( 3) 00:09:33.475 22.207 - 22.310: 98.7290% ( 6) 00:09:33.475 22.310 - 22.413: 98.8032% ( 9) 00:09:33.475 22.413 - 22.516: 98.8280% ( 3) 00:09:33.475 22.516 - 22.618: 98.8858% ( 7) 00:09:33.475 22.618 - 22.721: 98.9435% ( 7) 00:09:33.475 22.721 - 22.824: 98.9601% ( 2) 00:09:33.475 22.824 - 22.927: 98.9848% ( 3) 00:09:33.475 22.927 - 23.030: 99.0343% ( 6) 00:09:33.475 23.030 - 23.133: 99.0839% ( 6) 00:09:33.475 23.133 - 23.235: 99.1086% ( 3) 00:09:33.475 23.235 - 23.338: 99.1581% ( 6) 00:09:33.475 23.338 - 23.441: 99.1746% ( 2) 00:09:33.475 23.441 - 23.544: 99.1912% ( 2) 00:09:33.475 23.544 - 23.647: 99.2159% ( 3) 00:09:33.475 23.647 - 23.749: 99.2324% ( 2) 00:09:33.475 23.749 - 23.852: 99.2737% ( 5) 00:09:33.475 23.852 - 23.955: 99.3067% ( 4) 00:09:33.475 23.955 - 24.058: 99.3232% ( 2) 00:09:33.475 24.058 - 24.161: 99.3397% ( 2) 00:09:33.475 24.161 - 24.263: 99.3645% ( 3) 00:09:33.475 24.263 - 24.366: 99.3810% ( 2) 00:09:33.475 24.366 - 24.469: 99.3975% ( 2) 00:09:33.475 24.469 - 24.572: 99.4305% ( 4) 00:09:33.475 24.572 - 24.675: 99.4553% ( 3) 00:09:33.475 24.675 - 24.778: 99.4718% ( 2) 00:09:33.475 24.778 - 24.880: 99.4800% ( 1) 00:09:33.475 24.880 - 24.983: 99.4965% ( 2) 00:09:33.475 24.983 - 25.086: 99.5048% ( 1) 00:09:33.475 25.086 - 25.189: 99.5295% ( 3) 00:09:33.475 25.292 - 25.394: 99.5378% ( 1) 00:09:33.475 25.497 - 25.600: 99.5791% ( 5) 00:09:33.475 25.600 - 25.703: 99.5956% ( 2) 00:09:33.475 25.806 - 25.908: 99.6121% ( 2) 00:09:33.475 26.011 - 26.114: 99.6286% ( 2) 00:09:33.475 26.114 - 26.217: 99.6534% ( 3) 00:09:33.475 26.320 - 26.525: 99.6781% ( 3) 00:09:33.475 26.525 - 26.731: 99.7111% ( 4) 00:09:33.475 26.937 - 27.142: 99.7194% ( 1) 00:09:33.475 27.142 - 27.348: 99.7276% ( 1) 00:09:33.475 27.348 - 27.553: 99.7441% ( 2) 00:09:33.475 27.759 - 27.965: 99.7524% ( 1) 00:09:33.475 27.965 - 28.170: 99.7606% ( 1) 00:09:33.475 28.170 - 28.376: 99.7772% ( 2) 00:09:33.475 28.376 - 28.582: 99.7854% ( 1) 00:09:33.475 28.787 - 28.993: 99.7937% ( 1) 00:09:33.475 28.993 - 29.198: 99.8019% ( 1) 00:09:33.475 30.021 - 30.227: 99.8102% ( 1) 00:09:33.475 30.227 - 30.432: 99.8349% ( 3) 00:09:33.475 31.666 - 31.871: 99.8514% ( 2) 00:09:33.475 33.105 - 33.311: 99.8597% ( 1) 00:09:33.475 33.516 - 33.722: 99.8679% ( 1) 00:09:33.475 33.928 - 34.133: 99.8762% ( 1) 00:09:33.475 35.367 - 35.573: 99.8845% ( 1) 00:09:33.475 35.778 - 35.984: 99.8927% ( 1) 00:09:33.475 36.601 - 36.806: 99.9092% ( 2) 00:09:33.475 36.806 - 37.012: 99.9175% ( 1) 00:09:33.475 37.835 - 
38.040: 99.9257% ( 1) 00:09:33.475 38.040 - 38.246: 99.9340% ( 1) 00:09:33.475 38.657 - 38.863: 99.9422% ( 1) 00:09:33.475 38.863 - 39.068: 99.9505% ( 1) 00:09:33.475 39.891 - 40.096: 99.9587% ( 1) 00:09:33.475 41.741 - 41.947: 99.9670% ( 1) 00:09:33.475 43.386 - 43.592: 99.9752% ( 1) 00:09:33.475 50.172 - 50.378: 99.9835% ( 1) 00:09:33.475 61.276 - 61.687: 99.9917% ( 1) 00:09:33.475 69.089 - 69.500: 100.0000% ( 1) 00:09:33.475 00:09:33.475 Complete histogram 00:09:33.475 ================== 00:09:33.475 Range in us Cumulative Count 00:09:33.475 8.122 - 8.173: 0.0248% ( 3) 00:09:33.475 8.173 - 8.225: 0.0908% ( 8) 00:09:33.476 8.225 - 8.276: 0.1981% ( 13) 00:09:33.476 8.276 - 8.328: 1.1307% ( 113) 00:09:33.476 8.328 - 8.379: 5.1832% ( 491) 00:09:33.476 8.379 - 8.431: 12.2565% ( 857) 00:09:33.476 8.431 - 8.482: 20.4028% ( 987) 00:09:33.476 8.482 - 8.533: 28.7224% ( 1008) 00:09:33.476 8.533 - 8.585: 36.0185% ( 884) 00:09:33.476 8.585 - 8.636: 42.3077% ( 762) 00:09:33.476 8.636 - 8.688: 47.7633% ( 661) 00:09:33.476 8.688 - 8.739: 52.4265% ( 565) 00:09:33.476 8.739 - 8.790: 56.2314% ( 461) 00:09:33.476 8.790 - 8.842: 59.3678% ( 380) 00:09:33.476 8.842 - 8.893: 62.1740% ( 340) 00:09:33.476 8.893 - 8.945: 64.2869% ( 256) 00:09:33.476 8.945 - 8.996: 66.2595% ( 239) 00:09:33.476 8.996 - 9.047: 67.8112% ( 188) 00:09:33.476 9.047 - 9.099: 69.0822% ( 154) 00:09:33.476 9.099 - 9.150: 70.1717% ( 132) 00:09:33.476 9.150 - 9.202: 71.3107% ( 138) 00:09:33.476 9.202 - 9.253: 72.2681% ( 116) 00:09:33.476 9.253 - 9.304: 73.5226% ( 152) 00:09:33.476 9.304 - 9.356: 74.8102% ( 156) 00:09:33.476 9.356 - 9.407: 76.0977% ( 156) 00:09:33.476 9.407 - 9.459: 77.4926% ( 169) 00:09:33.476 9.459 - 9.510: 78.9204% ( 173) 00:09:33.476 9.510 - 9.561: 80.4226% ( 182) 00:09:33.476 9.561 - 9.613: 81.9660% ( 187) 00:09:33.476 9.613 - 9.664: 83.4846% ( 184) 00:09:33.476 9.664 - 9.716: 84.6732% ( 144) 00:09:33.476 9.716 - 9.767: 85.8039% ( 137) 00:09:33.476 9.767 - 9.818: 87.1410% ( 162) 00:09:33.476 9.818 - 9.870: 88.2965% ( 140) 00:09:33.476 9.870 - 9.921: 89.4602% ( 141) 00:09:33.476 9.921 - 9.973: 90.3516% ( 108) 00:09:33.476 9.973 - 10.024: 91.2347% ( 107) 00:09:33.476 10.024 - 10.076: 91.9610% ( 88) 00:09:33.476 10.076 - 10.127: 92.5553% ( 72) 00:09:33.476 10.127 - 10.178: 93.1083% ( 67) 00:09:33.476 10.178 - 10.230: 93.6118% ( 61) 00:09:33.476 10.230 - 10.281: 94.0822% ( 57) 00:09:33.476 10.281 - 10.333: 94.6434% ( 68) 00:09:33.476 10.333 - 10.384: 95.0396% ( 48) 00:09:33.476 10.384 - 10.435: 95.4110% ( 45) 00:09:33.476 10.435 - 10.487: 95.6669% ( 31) 00:09:33.476 10.487 - 10.538: 95.9062% ( 29) 00:09:33.476 10.538 - 10.590: 96.1621% ( 31) 00:09:33.476 10.590 - 10.641: 96.3684% ( 25) 00:09:33.476 10.641 - 10.692: 96.5830% ( 26) 00:09:33.476 10.692 - 10.744: 96.7233% ( 17) 00:09:33.476 10.744 - 10.795: 96.8141% ( 11) 00:09:33.476 10.795 - 10.847: 96.9132% ( 12) 00:09:33.476 10.847 - 10.898: 97.0287% ( 14) 00:09:33.476 10.898 - 10.949: 97.1113% ( 10) 00:09:33.476 10.949 - 11.001: 97.1608% ( 6) 00:09:33.476 11.001 - 11.052: 97.2103% ( 6) 00:09:33.476 11.052 - 11.104: 97.2681% ( 7) 00:09:33.476 11.155 - 11.206: 97.2763% ( 1) 00:09:33.476 11.206 - 11.258: 97.3093% ( 4) 00:09:33.476 11.258 - 11.309: 97.3506% ( 5) 00:09:33.476 11.309 - 11.361: 97.3671% ( 2) 00:09:33.476 11.361 - 11.412: 97.3754% ( 1) 00:09:33.476 11.412 - 11.463: 97.3836% ( 1) 00:09:33.476 11.463 - 11.515: 97.4249% ( 5) 00:09:33.476 11.515 - 11.566: 97.4331% ( 1) 00:09:33.476 11.566 - 11.618: 97.4497% ( 2) 00:09:33.476 11.618 - 11.669: 97.4827% ( 4) 00:09:33.476 
11.669 - 11.720: 97.5157% ( 4) 00:09:33.476 11.720 - 11.772: 97.5404% ( 3) 00:09:33.476 11.772 - 11.823: 97.5652% ( 3) 00:09:33.476 11.823 - 11.875: 97.5735% ( 1) 00:09:33.476 11.875 - 11.926: 97.6065% ( 4) 00:09:33.476 11.926 - 11.978: 97.6560% ( 6) 00:09:33.476 12.029 - 12.080: 97.6725% ( 2) 00:09:33.476 12.080 - 12.132: 97.6973% ( 3) 00:09:33.476 12.132 - 12.183: 97.7138% ( 2) 00:09:33.476 12.183 - 12.235: 97.7550% ( 5) 00:09:33.476 12.235 - 12.286: 97.7880% ( 4) 00:09:33.476 12.286 - 12.337: 97.7963% ( 1) 00:09:33.476 12.389 - 12.440: 97.8046% ( 1) 00:09:33.476 12.440 - 12.492: 97.8376% ( 4) 00:09:33.476 12.492 - 12.543: 97.8541% ( 2) 00:09:33.476 12.543 - 12.594: 97.8623% ( 1) 00:09:33.476 12.594 - 12.646: 97.8706% ( 1) 00:09:33.476 12.646 - 12.697: 97.8788% ( 1) 00:09:33.476 12.697 - 12.749: 97.8871% ( 1) 00:09:33.476 12.749 - 12.800: 97.8953% ( 1) 00:09:33.476 12.851 - 12.903: 97.9036% ( 1) 00:09:33.476 12.903 - 12.954: 97.9366% ( 4) 00:09:33.476 12.954 - 13.006: 97.9531% ( 2) 00:09:33.476 13.006 - 13.057: 97.9779% ( 3) 00:09:33.476 13.057 - 13.108: 97.9861% ( 1) 00:09:33.476 13.108 - 13.160: 97.9944% ( 1) 00:09:33.476 13.160 - 13.263: 98.0026% ( 1) 00:09:33.476 13.263 - 13.365: 98.0191% ( 2) 00:09:33.476 13.468 - 13.571: 98.0522% ( 4) 00:09:33.476 13.674 - 13.777: 98.0604% ( 1) 00:09:33.476 13.777 - 13.880: 98.0687% ( 1) 00:09:33.476 13.880 - 13.982: 98.1099% ( 5) 00:09:33.476 14.291 - 14.394: 98.1182% ( 1) 00:09:33.476 14.394 - 14.496: 98.1264% ( 1) 00:09:33.476 14.496 - 14.599: 98.1347% ( 1) 00:09:33.476 14.702 - 14.805: 98.1430% ( 1) 00:09:33.476 14.805 - 14.908: 98.1512% ( 1) 00:09:33.476 14.908 - 15.010: 98.1595% ( 1) 00:09:33.476 15.010 - 15.113: 98.1760% ( 2) 00:09:33.476 15.216 - 15.319: 98.2007% ( 3) 00:09:33.476 15.627 - 15.730: 98.2255% ( 3) 00:09:33.476 15.730 - 15.833: 98.2502% ( 3) 00:09:33.476 15.833 - 15.936: 98.2998% ( 6) 00:09:33.476 15.936 - 16.039: 98.3245% ( 3) 00:09:33.476 16.039 - 16.141: 98.3410% ( 2) 00:09:33.476 16.141 - 16.244: 98.3575% ( 2) 00:09:33.476 16.244 - 16.347: 98.3741% ( 2) 00:09:33.476 16.347 - 16.450: 98.3906% ( 2) 00:09:33.476 16.450 - 16.553: 98.4566% ( 8) 00:09:33.476 16.553 - 16.655: 98.4896% ( 4) 00:09:33.476 16.655 - 16.758: 98.5391% ( 6) 00:09:33.476 16.758 - 16.861: 98.5969% ( 7) 00:09:33.476 16.861 - 16.964: 98.6134% ( 2) 00:09:33.476 16.964 - 17.067: 98.6299% ( 2) 00:09:33.476 17.067 - 17.169: 98.6547% ( 3) 00:09:33.476 17.169 - 17.272: 98.6794% ( 3) 00:09:33.476 17.272 - 17.375: 98.7042% ( 3) 00:09:33.476 17.375 - 17.478: 98.7207% ( 2) 00:09:33.476 17.478 - 17.581: 98.7372% ( 2) 00:09:33.476 17.581 - 17.684: 98.7537% ( 2) 00:09:33.476 17.684 - 17.786: 98.7950% ( 5) 00:09:33.476 17.786 - 17.889: 98.8115% ( 2) 00:09:33.476 17.889 - 17.992: 98.8610% ( 6) 00:09:33.476 17.992 - 18.095: 98.9105% ( 6) 00:09:33.476 18.095 - 18.198: 98.9683% ( 7) 00:09:33.476 18.198 - 18.300: 99.0178% ( 6) 00:09:33.476 18.300 - 18.403: 99.0756% ( 7) 00:09:33.476 18.403 - 18.506: 99.1499% ( 9) 00:09:33.476 18.506 - 18.609: 99.2324% ( 10) 00:09:33.476 18.609 - 18.712: 99.2654% ( 4) 00:09:33.476 18.712 - 18.814: 99.2902% ( 3) 00:09:33.476 18.814 - 18.917: 99.2984% ( 1) 00:09:33.476 18.917 - 19.020: 99.3232% ( 3) 00:09:33.476 19.020 - 19.123: 99.3562% ( 4) 00:09:33.476 19.123 - 19.226: 99.3975% ( 5) 00:09:33.476 19.226 - 19.329: 99.4223% ( 3) 00:09:33.476 19.329 - 19.431: 99.4470% ( 3) 00:09:33.476 19.431 - 19.534: 99.4635% ( 2) 00:09:33.476 19.534 - 19.637: 99.4800% ( 2) 00:09:33.476 19.637 - 19.740: 99.4965% ( 2) 00:09:33.476 19.740 - 19.843: 99.5130% ( 2) 
00:09:33.476 19.843 - 19.945: 99.5543% ( 5) 00:09:33.476 19.945 - 20.048: 99.5791% ( 3) 00:09:33.476 20.048 - 20.151: 99.5873% ( 1) 00:09:33.476 20.151 - 20.254: 99.5956% ( 1) 00:09:33.476 20.254 - 20.357: 99.6038% ( 1) 00:09:33.476 20.357 - 20.459: 99.6121% ( 1) 00:09:33.476 20.459 - 20.562: 99.6451% ( 4) 00:09:33.476 20.562 - 20.665: 99.6534% ( 1) 00:09:33.476 20.768 - 20.871: 99.6616% ( 1) 00:09:33.476 20.871 - 20.973: 99.6781% ( 2) 00:09:33.476 20.973 - 21.076: 99.6864% ( 1) 00:09:33.476 21.076 - 21.179: 99.7029% ( 2) 00:09:33.476 21.179 - 21.282: 99.7111% ( 1) 00:09:33.476 21.282 - 21.385: 99.7194% ( 1) 00:09:33.476 21.590 - 21.693: 99.7359% ( 2) 00:09:33.476 21.693 - 21.796: 99.7524% ( 2) 00:09:33.476 21.796 - 21.899: 99.7937% ( 5) 00:09:33.476 22.207 - 22.310: 99.8019% ( 1) 00:09:33.476 22.413 - 22.516: 99.8102% ( 1) 00:09:33.476 22.618 - 22.721: 99.8184% ( 1) 00:09:33.476 22.721 - 22.824: 99.8267% ( 1) 00:09:33.476 22.824 - 22.927: 99.8514% ( 3) 00:09:33.476 23.235 - 23.338: 99.8597% ( 1) 00:09:33.476 23.647 - 23.749: 99.8679% ( 1) 00:09:33.476 23.955 - 24.058: 99.8762% ( 1) 00:09:33.476 24.263 - 24.366: 99.8845% ( 1) 00:09:33.476 24.880 - 24.983: 99.8927% ( 1) 00:09:33.476 25.292 - 25.394: 99.9010% ( 1) 00:09:33.476 28.787 - 28.993: 99.9092% ( 1) 00:09:33.476 30.638 - 30.843: 99.9175% ( 1) 00:09:33.476 32.077 - 32.283: 99.9257% ( 1) 00:09:33.476 33.516 - 33.722: 99.9340% ( 1) 00:09:33.476 35.573 - 35.778: 99.9422% ( 1) 00:09:33.476 40.508 - 40.713: 99.9505% ( 1) 00:09:33.476 42.564 - 42.769: 99.9587% ( 1) 00:09:33.476 46.676 - 46.882: 99.9670% ( 1) 00:09:33.476 56.752 - 57.163: 99.9752% ( 1) 00:09:33.476 61.276 - 61.687: 99.9835% ( 1) 00:09:33.476 104.456 - 104.867: 99.9917% ( 1) 00:09:33.476 129.131 - 129.953: 100.0000% ( 1) 00:09:33.476 00:09:33.476 00:09:33.476 real 0m1.325s 00:09:33.476 user 0m1.095s 00:09:33.476 sys 0m0.180s 00:09:33.476 15:09:31 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.477 15:09:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 ************************************ 00:09:33.477 END TEST nvme_overhead 00:09:33.477 ************************************ 00:09:33.477 15:09:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:33.477 15:09:31 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:33.477 15:09:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.477 15:09:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.477 ************************************ 00:09:33.477 START TEST nvme_arbitration 00:09:33.477 ************************************ 00:09:33.477 15:09:31 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:36.763 Initializing NVMe Controllers 00:09:36.763 Attached to 0000:00:10.0 00:09:36.763 Attached to 0000:00:11.0 00:09:36.763 Attached to 0000:00:13.0 00:09:36.763 Attached to 0000:00:12.0 00:09:36.763 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:36.763 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:36.763 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:36.763 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:36.763 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:36.763 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:36.763 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:36.763 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:36.763 Initialization complete. Launching workers. 00:09:36.763 Starting thread on core 1 with urgent priority queue 00:09:36.763 Starting thread on core 2 with urgent priority queue 00:09:36.763 Starting thread on core 3 with urgent priority queue 00:09:36.763 Starting thread on core 0 with urgent priority queue 00:09:36.763 QEMU NVMe Ctrl (12340 ) core 0: 3434.67 IO/s 29.11 secs/100000 ios 00:09:36.763 QEMU NVMe Ctrl (12342 ) core 0: 3434.67 IO/s 29.11 secs/100000 ios 00:09:36.763 QEMU NVMe Ctrl (12341 ) core 1: 3264.00 IO/s 30.64 secs/100000 ios 00:09:36.763 QEMU NVMe Ctrl (12342 ) core 1: 3264.00 IO/s 30.64 secs/100000 ios 00:09:36.763 QEMU NVMe Ctrl (12343 ) core 2: 3434.67 IO/s 29.11 secs/100000 ios 00:09:36.763 QEMU NVMe Ctrl (12342 ) core 3: 3413.33 IO/s 29.30 secs/100000 ios 00:09:36.763 ======================================================== 00:09:36.763 00:09:36.763 00:09:36.763 real 0m3.312s 00:09:36.763 user 0m9.033s 00:09:36.763 sys 0m0.174s 00:09:36.763 ************************************ 00:09:36.763 END TEST nvme_arbitration 00:09:36.763 ************************************ 00:09:36.763 15:09:35 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.763 15:09:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:36.763 15:09:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:36.763 15:09:35 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:36.763 15:09:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.763 15:09:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.763 ************************************ 00:09:36.763 START TEST nvme_single_aen 00:09:36.763 ************************************ 00:09:36.763 15:09:35 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:37.021 Asynchronous Event Request test 00:09:37.021 Attached to 0000:00:10.0 00:09:37.021 Attached to 0000:00:11.0 00:09:37.021 Attached to 0000:00:13.0 00:09:37.021 Attached to 0000:00:12.0 00:09:37.021 Reset controller to setup AER completions for this process 00:09:37.021 Registering asynchronous event callbacks... 
00:09:37.021 Getting orig temperature thresholds of all controllers 00:09:37.021 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:37.021 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:37.021 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:37.021 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:37.021 Setting all controllers temperature threshold low to trigger AER 00:09:37.021 Waiting for all controllers temperature threshold to be set lower 00:09:37.021 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:37.021 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:37.021 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:37.021 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:37.021 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:37.021 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:37.021 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:37.021 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:37.021 Waiting for all controllers to trigger AER and reset threshold 00:09:37.021 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:37.021 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:37.021 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:37.021 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:37.021 Cleaning up... 00:09:37.021 ************************************ 00:09:37.021 END TEST nvme_single_aen 00:09:37.021 ************************************ 00:09:37.021 00:09:37.021 real 0m0.329s 00:09:37.021 user 0m0.124s 00:09:37.021 sys 0m0.162s 00:09:37.021 15:09:35 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.021 15:09:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 15:09:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:37.278 15:09:35 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:37.278 15:09:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.278 15:09:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:37.278 ************************************ 00:09:37.278 START TEST nvme_doorbell_aers 00:09:37.278 ************************************ 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
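[Editor's note] The AER tests above all drive the same mechanism: register an asynchronous-event callback, set the temperature-threshold Feature below the current reading so the controller raises an event, and reset the threshold from the callback ("aer_cb - Resetting Temp Threshold..."). A sketch of that flow; spdk_nvme_ctrlr_register_aer_callback and the set-features call are the public API, and the cdw0 decoding follows the NVMe spec layout:

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        return;
    }
    union spdk_nvme_async_event_completion ev;
    ev.raw = cpl->cdw0;
    /* ev.bits.async_event_type / async_event_info identify the event;
     * log page 2 (SMART / health) carries the temperature alert seen above */
}

...
spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
/* lower the composite temperature threshold (feature 0x04) below the current temp */
uint32_t cdw11 = 200;  /* Kelvin; illustrative value */
spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                cdw11, 0, NULL, 0, set_feature_done, NULL);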
00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:37.278 15:09:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:37.535 [2024-10-01 15:09:35.953352] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:47.498 Executing: test_write_invalid_db 00:09:47.498 Waiting for AER completion... 00:09:47.498 Failure: test_write_invalid_db 00:09:47.498 00:09:47.498 Executing: test_invalid_db_write_overflow_sq 00:09:47.498 Waiting for AER completion... 00:09:47.498 Failure: test_invalid_db_write_overflow_sq 00:09:47.498 00:09:47.498 Executing: test_invalid_db_write_overflow_cq 00:09:47.498 Waiting for AER completion... 00:09:47.498 Failure: test_invalid_db_write_overflow_cq 00:09:47.498 00:09:47.498 15:09:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:47.498 15:09:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:47.498 [2024-10-01 15:09:46.040746] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:57.506 Executing: test_write_invalid_db 00:09:57.506 Waiting for AER completion... 00:09:57.506 Failure: test_write_invalid_db 00:09:57.506 00:09:57.506 Executing: test_invalid_db_write_overflow_sq 00:09:57.506 Waiting for AER completion... 00:09:57.506 Failure: test_invalid_db_write_overflow_sq 00:09:57.506 00:09:57.506 Executing: test_invalid_db_write_overflow_cq 00:09:57.506 Waiting for AER completion... 00:09:57.506 Failure: test_invalid_db_write_overflow_cq 00:09:57.506 00:09:57.506 15:09:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:57.506 15:09:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:57.765 [2024-10-01 15:09:56.071623] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:07.752 Executing: test_write_invalid_db 00:10:07.752 Waiting for AER completion... 00:10:07.752 Failure: test_write_invalid_db 00:10:07.752 00:10:07.752 Executing: test_invalid_db_write_overflow_sq 00:10:07.752 Waiting for AER completion... 00:10:07.752 Failure: test_invalid_db_write_overflow_sq 00:10:07.752 00:10:07.752 Executing: test_invalid_db_write_overflow_cq 00:10:07.752 Waiting for AER completion... 
00:10:07.753 Failure: test_invalid_db_write_overflow_cq 00:10:07.753 00:10:07.753 15:10:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:07.753 15:10:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:07.753 [2024-10-01 15:10:06.128586] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 Executing: test_write_invalid_db 00:10:17.723 Waiting for AER completion... 00:10:17.723 Failure: test_write_invalid_db 00:10:17.723 00:10:17.723 Executing: test_invalid_db_write_overflow_sq 00:10:17.723 Waiting for AER completion... 00:10:17.723 Failure: test_invalid_db_write_overflow_sq 00:10:17.723 00:10:17.723 Executing: test_invalid_db_write_overflow_cq 00:10:17.723 Waiting for AER completion... 00:10:17.723 Failure: test_invalid_db_write_overflow_cq 00:10:17.723 00:10:17.723 00:10:17.723 real 0m40.319s 00:10:17.723 user 0m28.455s 00:10:17.723 sys 0m11.465s 00:10:17.723 15:10:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.723 15:10:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:17.723 ************************************ 00:10:17.723 END TEST nvme_doorbell_aers 00:10:17.723 ************************************ 00:10:17.723 15:10:15 nvme -- nvme/nvme.sh@97 -- # uname 00:10:17.723 15:10:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:17.723 15:10:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:17.723 15:10:15 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:17.723 15:10:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.723 15:10:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.723 ************************************ 00:10:17.723 START TEST nvme_multi_aen 00:10:17.723 ************************************ 00:10:17.723 15:10:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:17.723 [2024-10-01 15:10:16.209061] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.209158] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.209188] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.210965] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.211019] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.211034] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.212544] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. 
Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.212585] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.212601] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.214080] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.214121] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 [2024-10-01 15:10:16.214136] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:10:17.723 Child process pid: 77017 00:10:18.027 [Child] Asynchronous Event Request test 00:10:18.027 [Child] Attached to 0000:00:10.0 00:10:18.027 [Child] Attached to 0000:00:11.0 00:10:18.027 [Child] Attached to 0000:00:13.0 00:10:18.027 [Child] Attached to 0000:00:12.0 00:10:18.027 [Child] Registering asynchronous event callbacks... 00:10:18.027 [Child] Getting orig temperature thresholds of all controllers 00:10:18.027 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:18.027 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 [Child] Cleaning up... 00:10:18.027 Asynchronous Event Request test 00:10:18.027 Attached to 0000:00:10.0 00:10:18.027 Attached to 0000:00:11.0 00:10:18.027 Attached to 0000:00:13.0 00:10:18.027 Attached to 0000:00:12.0 00:10:18.027 Reset controller to setup AER completions for this process 00:10:18.027 Registering asynchronous event callbacks... 
00:10:18.027 Getting orig temperature thresholds of all controllers 00:10:18.027 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:18.027 Setting all controllers temperature threshold low to trigger AER 00:10:18.027 Waiting for all controllers temperature threshold to be set lower 00:10:18.027 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:18.027 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:18.027 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:18.027 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:18.027 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:18.027 Waiting for all controllers to trigger AER and reset threshold 00:10:18.027 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.027 Cleaning up... 00:10:18.286 00:10:18.286 real 0m0.608s 00:10:18.286 user 0m0.198s 00:10:18.286 sys 0m0.297s 00:10:18.286 15:10:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.286 15:10:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:18.286 ************************************ 00:10:18.286 END TEST nvme_multi_aen 00:10:18.286 ************************************ 00:10:18.286 15:10:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:18.286 15:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:18.286 15:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.286 15:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.286 ************************************ 00:10:18.286 START TEST nvme_startup 00:10:18.287 ************************************ 00:10:18.287 15:10:16 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:18.545 Initializing NVMe Controllers 00:10:18.545 Attached to 0000:00:10.0 00:10:18.545 Attached to 0000:00:11.0 00:10:18.545 Attached to 0000:00:13.0 00:10:18.545 Attached to 0000:00:12.0 00:10:18.545 Initialization complete. 00:10:18.545 Time used:171997.203 (us). 
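[Editor's note] The "Time used" figure above is wall-clock time from driver init through all four controllers attaching, presumably derived from the TSC. A minimal sketch of that bookkeeping under the same assumptions as the earlier sketches:

uint64_t start = spdk_get_ticks();
spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);  /* attach all controllers */
double used_us = (double)(spdk_get_ticks() - start) * 1000000.0 / spdk_get_ticks_hz();
printf("Time used:%.3f (us).\n", used_us);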
00:10:18.545 ************************************ 00:10:18.545 END TEST nvme_startup 00:10:18.545 ************************************ 00:10:18.545 00:10:18.545 real 0m0.259s 00:10:18.545 user 0m0.092s 00:10:18.545 sys 0m0.125s 00:10:18.545 15:10:16 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.545 15:10:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:18.545 15:10:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:18.545 15:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.545 15:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.545 15:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.545 ************************************ 00:10:18.545 START TEST nvme_multi_secondary 00:10:18.545 ************************************ 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77073 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77074 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:18.545 15:10:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:21.828 Initializing NVMe Controllers 00:10:21.828 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.828 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.828 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.828 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.828 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:21.828 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:21.828 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:21.828 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:21.828 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:21.828 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:21.828 Initialization complete. Launching workers. 
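[Editor's note] The "-i 0" passed to every tool in this run is the shared-memory ID: processes started with the same shm_id join one DPDK hugepage domain, so in the multi-secondary test below the first spdk_nvme_perf instance becomes the primary and the others attach as secondaries against controllers the primary already owns, instead of resetting them. A sketch of the corresponding env setup (name, core_mask, and shm_id are real spdk_env_opts members):

struct spdk_env_opts opts;
spdk_env_opts_init(&opts);
opts.name = "perf";
opts.core_mask = "0x2";  /* mirrors -c 0x2 above */
opts.shm_id = 0;         /* mirrors -i 0: share state with other shm_id-0 processes */
if (spdk_env_init(&opts) < 0) {
    fprintf(stderr, "env init failed\n");
    return 1;
}
/* spdk_nvme_probe() in a secondary process then attaches to controllers
 * already initialized by the primary rather than re-initializing them */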
00:10:21.828 ======================================================== 00:10:21.828 Latency(us) 00:10:21.828 Device Information : IOPS MiB/s Average min max 00:10:21.828 PCIE (0000:00:10.0) NSID 1 from core 1: 4854.92 18.96 3293.06 1621.59 9641.17 00:10:21.828 PCIE (0000:00:11.0) NSID 1 from core 1: 4854.92 18.96 3295.10 1621.30 9026.98 00:10:21.828 PCIE (0000:00:13.0) NSID 1 from core 1: 4854.92 18.96 3295.10 1568.83 8764.60 00:10:21.829 PCIE (0000:00:12.0) NSID 1 from core 1: 4854.92 18.96 3295.16 1563.17 9785.49 00:10:21.829 PCIE (0000:00:12.0) NSID 2 from core 1: 4854.92 18.96 3295.14 1401.69 9217.84 00:10:21.829 PCIE (0000:00:12.0) NSID 3 from core 1: 4854.92 18.96 3295.19 1195.63 9105.19 00:10:21.829 ======================================================== 00:10:21.829 Total : 29129.52 113.79 3294.79 1195.63 9785.49 00:10:21.829 00:10:22.087 Initializing NVMe Controllers 00:10:22.087 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:22.087 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:22.087 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:22.087 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:22.087 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:22.087 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:22.087 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:22.087 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:22.087 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:22.087 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:22.087 Initialization complete. Launching workers. 00:10:22.087 ======================================================== 00:10:22.087 Latency(us) 00:10:22.087 Device Information : IOPS MiB/s Average min max 00:10:22.087 PCIE (0000:00:10.0) NSID 1 from core 2: 3255.23 12.72 4912.44 1157.14 15746.49 00:10:22.087 PCIE (0000:00:11.0) NSID 1 from core 2: 3255.23 12.72 4913.78 1119.71 14871.37 00:10:22.087 PCIE (0000:00:13.0) NSID 1 from core 2: 3255.23 12.72 4912.76 1240.15 18882.32 00:10:22.087 PCIE (0000:00:12.0) NSID 1 from core 2: 3255.23 12.72 4908.58 1282.05 14008.44 00:10:22.087 PCIE (0000:00:12.0) NSID 2 from core 2: 3255.23 12.72 4905.20 1236.57 13811.68 00:10:22.087 PCIE (0000:00:12.0) NSID 3 from core 2: 3255.23 12.72 4904.68 1134.11 14132.01 00:10:22.087 ======================================================== 00:10:22.087 Total : 19531.37 76.29 4909.57 1119.71 18882.32 00:10:22.087 00:10:22.087 15:10:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77073 00:10:23.990 Initializing NVMe Controllers 00:10:23.990 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.990 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.990 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.990 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.990 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:23.990 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:23.990 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:23.990 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:23.990 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:23.990 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:23.990 Initialization complete. Launching workers. 
00:10:23.990 ======================================================== 00:10:23.990 Latency(us) 00:10:23.990 Device Information : IOPS MiB/s Average min max 00:10:23.990 PCIE (0000:00:10.0) NSID 1 from core 0: 7990.17 31.21 2000.86 881.25 9662.84 00:10:23.990 PCIE (0000:00:11.0) NSID 1 from core 0: 7990.17 31.21 2002.01 923.92 9246.33 00:10:23.990 PCIE (0000:00:13.0) NSID 1 from core 0: 7990.17 31.21 2001.99 775.47 8635.44 00:10:23.990 PCIE (0000:00:12.0) NSID 1 from core 0: 7990.17 31.21 2001.96 658.54 9127.21 00:10:23.990 PCIE (0000:00:12.0) NSID 2 from core 0: 7990.17 31.21 2001.94 549.00 9496.34 00:10:23.990 PCIE (0000:00:12.0) NSID 3 from core 0: 7990.17 31.21 2001.90 459.21 9238.74 00:10:23.990 ======================================================== 00:10:23.990 Total : 47940.99 187.27 2001.78 459.21 9662.84 00:10:23.990 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77074 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77143 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77144 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:23.990 15:10:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:27.274 Initializing NVMe Controllers 00:10:27.274 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:27.274 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:27.274 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:27.274 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:27.274 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:27.274 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:27.274 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:27.275 Initialization complete. Launching workers. 
00:10:27.275 ======================================================== 00:10:27.275 Latency(us) 00:10:27.275 Device Information : IOPS MiB/s Average min max 00:10:27.275 PCIE (0000:00:10.0) NSID 1 from core 1: 5170.59 20.20 3092.01 912.06 9284.34 00:10:27.275 PCIE (0000:00:11.0) NSID 1 from core 1: 5170.59 20.20 3093.78 954.78 7936.55 00:10:27.275 PCIE (0000:00:13.0) NSID 1 from core 1: 5170.59 20.20 3094.05 944.72 9010.15 00:10:27.275 PCIE (0000:00:12.0) NSID 1 from core 1: 5170.59 20.20 3093.82 951.41 9037.61 00:10:27.275 PCIE (0000:00:12.0) NSID 2 from core 1: 5170.59 20.20 3093.92 955.81 9177.10 00:10:27.275 PCIE (0000:00:12.0) NSID 3 from core 1: 5170.59 20.20 3093.94 946.07 9794.76 00:10:27.275 ======================================================== 00:10:27.275 Total : 31023.55 121.19 3093.59 912.06 9794.76 00:10:27.275 00:10:27.275 Initializing NVMe Controllers 00:10:27.275 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:27.275 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:27.275 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:27.275 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:27.275 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:27.275 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:27.275 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:27.275 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:27.275 Initialization complete. Launching workers. 00:10:27.275 ======================================================== 00:10:27.275 Latency(us) 00:10:27.275 Device Information : IOPS MiB/s Average min max 00:10:27.275 PCIE (0000:00:10.0) NSID 1 from core 0: 4750.36 18.56 3365.42 1009.92 9026.96 00:10:27.275 PCIE (0000:00:11.0) NSID 1 from core 0: 4750.36 18.56 3367.40 1056.08 8206.01 00:10:27.275 PCIE (0000:00:13.0) NSID 1 from core 0: 4750.36 18.56 3367.34 1058.62 9040.86 00:10:27.275 PCIE (0000:00:12.0) NSID 1 from core 0: 4750.36 18.56 3367.27 965.57 9770.12 00:10:27.275 PCIE (0000:00:12.0) NSID 2 from core 0: 4750.36 18.56 3367.24 904.41 9193.67 00:10:27.275 PCIE (0000:00:12.0) NSID 3 from core 0: 4750.36 18.56 3367.14 753.40 8878.89 00:10:27.275 ======================================================== 00:10:27.275 Total : 28502.19 111.34 3366.97 753.40 9770.12 00:10:27.275 00:10:29.178 Initializing NVMe Controllers 00:10:29.178 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:29.178 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:29.178 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:29.178 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:29.178 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:29.178 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:29.178 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:29.178 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:29.178 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:29.178 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:29.178 Initialization complete. Launching workers. 
00:10:29.178 ======================================================== 00:10:29.178 Latency(us) 00:10:29.178 Device Information : IOPS MiB/s Average min max 00:10:29.178 PCIE (0000:00:10.0) NSID 1 from core 2: 3423.86 13.37 4670.29 966.00 16639.03 00:10:29.178 PCIE (0000:00:11.0) NSID 1 from core 2: 3423.86 13.37 4672.77 976.72 18247.43 00:10:29.178 PCIE (0000:00:13.0) NSID 1 from core 2: 3423.86 13.37 4672.67 977.15 17452.99 00:10:29.178 PCIE (0000:00:12.0) NSID 1 from core 2: 3423.86 13.37 4672.32 972.39 17438.61 00:10:29.178 PCIE (0000:00:12.0) NSID 2 from core 2: 3423.86 13.37 4672.19 716.97 17296.54 00:10:29.178 PCIE (0000:00:12.0) NSID 3 from core 2: 3423.86 13.37 4671.66 652.91 18224.63 00:10:29.178 ======================================================== 00:10:29.178 Total : 20543.14 80.25 4671.98 652.91 18247.43 00:10:29.178 00:10:29.436 ************************************ 00:10:29.436 END TEST nvme_multi_secondary 00:10:29.436 ************************************ 00:10:29.436 15:10:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77143 00:10:29.436 15:10:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77144 00:10:29.436 00:10:29.436 real 0m10.812s 00:10:29.436 user 0m18.382s 00:10:29.436 sys 0m1.018s 00:10:29.437 15:10:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.437 15:10:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:29.437 15:10:27 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:29.437 15:10:27 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/76082 ]] 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1090 -- # kill 76082 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1091 -- # wait 76082 00:10:29.437 [2024-10-01 15:10:27.843796] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.844497] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.844997] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.845527] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.847153] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.847283] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.847342] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.847394] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.848565] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 
00:10:29.437 [2024-10-01 15:10:27.848665] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.848711] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.848768] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.849896] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.849996] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.850041] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 [2024-10-01 15:10:27.850091] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77016) is not found. Dropping the request. 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:10:29.437 15:10:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.437 15:10:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.696 ************************************ 00:10:29.696 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:29.696 ************************************ 00:10:29.696 15:10:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:29.696 * Looking for test storage... 
00:10:29.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:29.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.696 --rc genhtml_branch_coverage=1 00:10:29.696 --rc genhtml_function_coverage=1 00:10:29.696 --rc genhtml_legend=1 00:10:29.696 --rc geninfo_all_blocks=1 00:10:29.696 --rc geninfo_unexecuted_blocks=1 00:10:29.696 00:10:29.696 ' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:29.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.696 --rc genhtml_branch_coverage=1 00:10:29.696 --rc genhtml_function_coverage=1 00:10:29.696 --rc genhtml_legend=1 00:10:29.696 --rc geninfo_all_blocks=1 00:10:29.696 --rc geninfo_unexecuted_blocks=1 00:10:29.696 00:10:29.696 ' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:29.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.696 --rc genhtml_branch_coverage=1 00:10:29.696 --rc genhtml_function_coverage=1 00:10:29.696 --rc genhtml_legend=1 00:10:29.696 --rc geninfo_all_blocks=1 00:10:29.696 --rc geninfo_unexecuted_blocks=1 00:10:29.696 00:10:29.696 ' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:29.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.696 --rc genhtml_branch_coverage=1 00:10:29.696 --rc genhtml_function_coverage=1 00:10:29.696 --rc genhtml_legend=1 00:10:29.696 --rc geninfo_all_blocks=1 00:10:29.696 --rc geninfo_unexecuted_blocks=1 00:10:29.696 00:10:29.696 ' 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:29.696 
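
The parameter block just traced is worth pausing on: the injected admin command will be held for 15,000,000 us (15 s), while test_timeout only allows the whole issue-and-reset sequence 5 s of wall time, so the test passes only if resetting the controller aborts the stuck command instead of waiting out the hold. A minimal sketch of the same setup, with the values copied from the trace (the surrounding helpers live in autotest_common.sh and are not reproduced here):

    # Parameters of the stuck-admin-command test, as traced above.
    ctrlr_name=nvme0                # bdev_nvme controller name used by the RPCs
    err_injection_timeout=15000000  # hold the injected command for 15 s (microseconds)
    test_timeout=5                  # the reset must finish well under the 15 s hold
    err_injection_sct=0             # expected Status Code Type: generic command status
    err_injection_sc=1              # expected Status Code: Invalid Command Opcode

The asymmetry between the 15 s hold and the 5 s budget is the whole point of the test: a reset path that merely waited for the held command to complete would blow the budget and fail.
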
15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:29.696 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:29.697 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:29.954 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:29.954 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:29.954 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77311 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77311 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 77311 ']' 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
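
The target setup traced above follows a common autotest pattern: enumerate NVMe BDFs by rendering a JSON config with gen_nvme.sh and extracting .config[].params.traddr with jq, take the first entry, start spdk_tgt on a four-core reactor mask, and block until the RPC socket answers. A hedged stand-alone sketch of that flow (waitforlisten is approximated here with a crude RPC poll; the real helper also verifies the PID stays alive):

    #!/usr/bin/env bash
    # Sketch: first-BDF discovery and target launch, as in the trace above.
    rootdir=/home/vagrant/spdk_repo/spdk

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}                          # 0000:00:10.0 in this run

    "$rootdir/build/bin/spdk_tgt" -m 0xF &  # reactor mask 0xF = lcores 0-3
    spdk_target_pid=$!
    # Crude waitforlisten: poll until the UNIX-domain RPC socket responds.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
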
00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.955 15:10:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:29.955 [2024-10-01 15:10:28.441087] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:10:29.955 [2024-10-01 15:10:28.441243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77311 ] 00:10:30.213 [2024-10-01 15:10:28.620831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.213 [2024-10-01 15:10:28.673624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.213 [2024-10-01 15:10:28.673854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.213 [2024-10-01 15:10:28.673960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.213 [2024-10-01 15:10:28.674125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:30.779 nvme0n1 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.779 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_HmMFx.txt 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 true 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727795429 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77333 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:31.083 15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:31.083 
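
Decoded, the two RPCs above set the trap and then spring it. Opcode 10 is the admin Get Features command (0x0a); --do_not_submit tells bdev_nvme to queue the next matching command instead of submitting it to the device, and after --timeout-in-us it is completed in software with SCT 0x0 / SC 0x1, i.e. Invalid Command Opcode, matching the "INVALID OPCODE (00/01)" completion printed further down. The base64 argument to bdev_nvme_send_cmd is a raw 64-byte admin SQE: byte 0 is the opcode 0x0a and cdw10 is 0x00000007 (the Number of Queues feature), again matching the later "GET FEATURES NUMBER OF QUEUES ... cdw10:00000007" print. A sketch of the pair; the redirect of the completion JSON into the temp file is inferred from the jq -r .cpl read that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_file=/tmp/err_inj_HmMFx.txt

    # Arm: hold the next admin Get Features (opc 0x0a) for 15 s, then complete
    # it manually with SCT=0x0, SC=0x1 (Invalid Command Opcode).
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Spring: issue that Get Features (cdw10=0x7, Number of Queues) in the
    # background; it sits in the pending queue until the reset aborts it.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > "$tmp_file" &
    get_feat_pid=$!
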
15:10:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.989 [2024-10-01 15:10:31.356971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:32.989 [2024-10-01 15:10:31.357529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:32.989 [2024-10-01 15:10:31.357564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:32.989 [2024-10-01 15:10:31.357584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.989 [2024-10-01 15:10:31.360251] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.989 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77333 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77333 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77333 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_HmMFx.txt 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:32.989 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_HmMFx.txt 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77311 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 77311 ']' 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 77311 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77311 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77311' 00:10:32.990 killing process with pid 77311 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 77311 00:10:32.990 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 77311 00:10:33.558 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:33.558 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:33.558 00:10:33.558 real 0m3.954s 00:10:33.558 user 0m13.221s 00:10:33.558 sys 0m0.793s 00:10:33.558 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:33.558 ************************************ 00:10:33.558 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:33.558 ************************************ 00:10:33.558 15:10:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:33.558 15:10:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:33.558 15:10:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:33.558 15:10:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.558 15:10:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.558 15:10:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.558 ************************************ 00:10:33.558 START TEST nvme_fio 00:10:33.558 ************************************ 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:33.558 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:33.558 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:33.558 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.558 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:33.817 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:33.817 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:33.817 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:33.817 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:33.817 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:33.817 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:33.817 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:34.077 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:34.077 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:34.390 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:34.390 15:10:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:34.390 15:10:32 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:34.390 15:10:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:34.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:34.666 fio-3.35 00:10:34.666 Starting 1 thread 00:10:37.978 00:10:37.978 test: (groupid=0, jobs=1): err= 0: pid=77457: Tue Oct 1 15:10:36 2024 00:10:37.978 read: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec) 00:10:37.978 slat (usec): min=3, max=116, avg= 4.78, stdev= 1.34 00:10:37.978 clat (usec): min=189, max=12684, avg=2894.39, stdev=397.75 00:10:37.978 lat (usec): min=193, max=12800, avg=2899.17, stdev=398.28 00:10:37.978 clat percentiles (usec): 00:10:37.978 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:37.978 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:37.978 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3392], 00:10:37.978 | 99.00th=[ 3851], 99.50th=[ 5276], 99.90th=[ 7767], 99.95th=[10028], 00:10:37.978 | 99.99th=[12387] 00:10:37.978 bw ( KiB/s): min=84712, max=89120, per=98.69%, avg=87200.00, stdev=2258.23, samples=3 00:10:37.978 iops : min=21178, max=22280, avg=21800.00, stdev=564.56, samples=3 00:10:37.978 write: IOPS=21.9k, BW=85.7MiB/s (89.9MB/s)(172MiB/2001msec); 0 zone resets 00:10:37.978 slat (usec): min=4, max=228, avg= 4.98, stdev= 1.69 00:10:37.978 clat (usec): min=207, max=12423, avg=2899.46, stdev=405.58 00:10:37.978 lat (usec): min=212, max=12437, avg=2904.44, stdev=406.12 00:10:37.978 clat percentiles (usec): 00:10:37.978 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:37.978 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:37.978 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3392], 00:10:37.978 | 99.00th=[ 3884], 99.50th=[ 5407], 99.90th=[ 8356], 99.95th=[10421], 00:10:37.978 | 99.99th=[12125] 00:10:37.978 bw ( KiB/s): min=84560, max=89912, per=99.54%, avg=87370.67, stdev=2686.15, samples=3 00:10:37.978 iops : min=21140, max=22478, avg=21842.67, stdev=671.54, samples=3 00:10:37.978 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:37.978 lat (msec) : 2=0.36%, 4=98.73%, 10=0.82%, 20=0.05% 00:10:37.979 cpu : usr=99.30%, sys=0.10%, ctx=7, majf=0, minf=628 
00:10:37.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:37.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.979 issued rwts: total=44199,43907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.979 00:10:37.979 Run status group 0 (all jobs): 00:10:37.979 READ: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:37.979 WRITE: bw=85.7MiB/s (89.9MB/s), 85.7MiB/s-85.7MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:38.238 ----------------------------------------------------- 00:10:38.238 Suppressions used: 00:10:38.238 count bytes template 00:10:38.238 1 32 /usr/src/fio/parse.c 00:10:38.238 1 8 libtcmalloc_minimal.so 00:10:38.238 ----------------------------------------------------- 00:10:38.238 00:10:38.238 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:38.238 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:38.238 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:38.238 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:38.497 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:38.497 15:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:38.756 15:10:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:38.756 15:10:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:38.756 15:10:37 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:38.756 15:10:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:39.015 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:39.015 fio-3.35 00:10:39.015 Starting 1 thread 00:10:43.199 00:10:43.199 test: (groupid=0, jobs=1): err= 0: pid=77523: Tue Oct 1 15:10:41 2024 00:10:43.199 read: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec) 00:10:43.199 slat (nsec): min=3895, max=77055, avg=4777.52, stdev=1150.57 00:10:43.199 clat (usec): min=214, max=14143, avg=2891.59, stdev=375.01 00:10:43.199 lat (usec): min=219, max=14220, avg=2896.37, stdev=375.49 00:10:43.199 clat percentiles (usec): 00:10:43.199 | 1.00th=[ 2507], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:43.199 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:43.199 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130], 00:10:43.199 | 99.00th=[ 3884], 99.50th=[ 4621], 99.90th=[ 8160], 99.95th=[10421], 00:10:43.199 | 99.99th=[13829] 00:10:43.199 bw ( KiB/s): min=84512, max=90504, per=99.81%, avg=88285.33, stdev=3284.62, samples=3 00:10:43.199 iops : min=21128, max=22626, avg=22071.33, stdev=821.16, samples=3 00:10:43.199 write: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec); 0 zone resets 00:10:43.199 slat (nsec): min=4055, max=44510, avg=4920.42, stdev=1112.12 00:10:43.199 clat (usec): min=230, max=13960, avg=2896.19, stdev=384.68 00:10:43.199 lat (usec): min=234, max=13974, avg=2901.11, stdev=385.15 00:10:43.199 clat percentiles (usec): 00:10:43.199 | 1.00th=[ 2474], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:43.199 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:43.199 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3163], 00:10:43.199 | 99.00th=[ 3916], 99.50th=[ 4752], 99.90th=[ 8455], 99.95th=[11076], 00:10:43.199 | 99.99th=[13435] 00:10:43.199 bw ( KiB/s): min=84416, max=91248, per=100.00%, avg=88421.33, stdev=3565.25, samples=3 00:10:43.199 iops : min=21104, max=22812, avg=22105.33, stdev=891.31, samples=3 00:10:43.199 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:43.199 lat (msec) : 2=0.35%, 4=98.89%, 10=0.66%, 20=0.06% 00:10:43.199 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=627 00:10:43.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:43.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.199 issued rwts: total=44247,43959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.199 00:10:43.199 Run status group 0 (all jobs): 00:10:43.199 READ: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:43.199 WRITE: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:43.199 ----------------------------------------------------- 00:10:43.199 Suppressions used: 00:10:43.199 count bytes template 00:10:43.199 1 32 /usr/src/fio/parse.c 00:10:43.199 1 8 libtcmalloc_minimal.so 00:10:43.199 ----------------------------------------------------- 00:10:43.199 
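
Each of these fio runs repeats the same invocation pattern, visible in the trace: the SPDK external ioengine is injected through LD_PRELOAD, with libasan listed first so the sanitizer runtime is loaded before the ASan-instrumented plugin, and the target controller is selected through fio's --filename, where the BDF's colons are written as dots because fio parses ':' as a filename-list separator. Condensed into one hedged sketch:

    # One fio-over-SPDK run, condensed from the trace above.
    fio_dir=/usr/src/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # libasan first, then the plugin; '0000.00.11.0' is the BDF 0000:00:11.0
    # with ':' replaced by '.' to survive fio's filename parsing.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        "$fio_dir/fio" "$cfg" '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
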
00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:43.199 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:43.456 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:43.456 15:10:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:43.456 15:10:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:43.456 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:43.456 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:43.456 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:43.715 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:43.715 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:43.715 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:43.715 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:43.715 15:10:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:43.715 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:43.715 fio-3.35 00:10:43.715 Starting 1 thread 00:10:47.900 00:10:47.900 test: (groupid=0, jobs=1): err= 0: pid=77589: Tue Oct 1 15:10:45 2024 00:10:47.900 read: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec) 00:10:47.900 slat (nsec): min=3768, max=87674, avg=4747.49, stdev=1262.79 00:10:47.900 clat (usec): min=251, max=16155, avg=2911.57, stdev=492.24 00:10:47.900 lat (usec): min=256, max=16243, avg=2916.32, stdev=492.72 00:10:47.900 clat percentiles (usec): 00:10:47.900 | 1.00th=[ 2040], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:10:47.900 | 
30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:47.900 | 70.00th=[ 2900], 80.00th=[ 2999], 90.00th=[ 3294], 95.00th=[ 3425], 00:10:47.900 | 99.00th=[ 4490], 99.50th=[ 5342], 99.90th=[ 8979], 99.95th=[12911], 00:10:47.900 | 99.99th=[15926] 00:10:47.900 bw ( KiB/s): min=87456, max=88840, per=100.00%, avg=87933.33, stdev=785.56, samples=3 00:10:47.900 iops : min=21864, max=22210, avg=21983.33, stdev=196.39, samples=3 00:10:47.900 write: IOPS=21.8k, BW=85.3MiB/s (89.4MB/s)(171MiB/2001msec); 0 zone resets 00:10:47.900 slat (nsec): min=3883, max=40005, avg=4911.55, stdev=1235.45 00:10:47.900 clat (usec): min=387, max=15972, avg=2912.81, stdev=498.01 00:10:47.900 lat (usec): min=392, max=15986, avg=2917.72, stdev=498.48 00:10:47.900 clat percentiles (usec): 00:10:47.900 | 1.00th=[ 2073], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:10:47.900 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:47.900 | 70.00th=[ 2900], 80.00th=[ 2999], 90.00th=[ 3294], 95.00th=[ 3425], 00:10:47.900 | 99.00th=[ 4424], 99.50th=[ 5342], 99.90th=[10290], 99.95th=[13173], 00:10:47.900 | 99.99th=[15533] 00:10:47.900 bw ( KiB/s): min=87488, max=88632, per=100.00%, avg=88125.33, stdev=583.09, samples=3 00:10:47.900 iops : min=21872, max=22158, avg=22031.33, stdev=145.77, samples=3 00:10:47.900 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:10:47.900 lat (msec) : 2=0.82%, 4=97.84%, 10=1.18%, 20=0.09% 00:10:47.900 cpu : usr=99.20%, sys=0.20%, ctx=5, majf=0, minf=628 00:10:47.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:47.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.900 issued rwts: total=43973,43685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.900 00:10:47.900 Run status group 0 (all jobs): 00:10:47.900 READ: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:47.900 WRITE: bw=85.3MiB/s (89.4MB/s), 85.3MiB/s-85.3MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:10:47.900 ----------------------------------------------------- 00:10:47.900 Suppressions used: 00:10:47.900 count bytes template 00:10:47.900 1 32 /usr/src/fio/parse.c 00:10:47.900 1 8 libtcmalloc_minimal.so 00:10:47.900 ----------------------------------------------------- 00:10:47.900 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:47.900 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:48.158 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:48.158 15:10:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.158 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.158 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:48.158 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:48.159 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:48.417 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:48.417 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:48.417 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:48.417 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:48.417 15:10:46 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:48.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:48.417 fio-3.35 00:10:48.417 Starting 1 thread 00:10:52.608 00:10:52.608 test: (groupid=0, jobs=1): err= 0: pid=77655: Tue Oct 1 15:10:50 2024 00:10:52.608 read: IOPS=22.4k, BW=87.7MiB/s (91.9MB/s)(175MiB/2001msec) 00:10:52.608 slat (nsec): min=3804, max=75532, avg=4746.77, stdev=1194.12 00:10:52.608 clat (usec): min=249, max=13434, avg=2848.02, stdev=442.71 00:10:52.608 lat (usec): min=254, max=13509, avg=2852.76, stdev=443.32 00:10:52.608 clat percentiles (usec): 00:10:52.608 | 1.00th=[ 2606], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:10:52.608 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:10:52.608 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:10:52.608 | 99.00th=[ 4490], 99.50th=[ 5800], 99.90th=[ 8094], 99.95th=[10683], 00:10:52.608 | 99.99th=[13173] 00:10:52.608 bw ( KiB/s): min=84008, max=91272, per=98.60%, avg=88522.67, stdev=3940.65, samples=3 00:10:52.608 iops : min=21002, max=22818, avg=22130.67, stdev=985.16, samples=3 00:10:52.608 write: IOPS=22.3k, BW=87.1MiB/s (91.3MB/s)(174MiB/2001msec); 0 zone resets 00:10:52.608 slat (nsec): min=3911, max=40473, avg=4863.55, stdev=1130.58 00:10:52.608 clat (usec): min=212, max=13186, avg=2853.76, stdev=449.19 00:10:52.608 lat (usec): min=217, max=13199, avg=2858.63, stdev=449.77 00:10:52.608 clat percentiles (usec): 00:10:52.608 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:10:52.608 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:10:52.608 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 
00:10:52.608 | 99.00th=[ 4555], 99.50th=[ 5800], 99.90th=[ 8848], 99.95th=[10945], 00:10:52.608 | 99.99th=[12780] 00:10:52.608 bw ( KiB/s): min=83960, max=92128, per=99.43%, avg=88693.33, stdev=4236.03, samples=3 00:10:52.608 iops : min=20990, max=23032, avg=22173.33, stdev=1059.01, samples=3 00:10:52.608 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:52.608 lat (msec) : 2=0.05%, 4=98.58%, 10=1.27%, 20=0.07% 00:10:52.608 cpu : usr=99.30%, sys=0.15%, ctx=4, majf=0, minf=625 00:10:52.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:52.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.608 issued rwts: total=44912,44623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.608 00:10:52.608 Run status group 0 (all jobs): 00:10:52.608 READ: bw=87.7MiB/s (91.9MB/s), 87.7MiB/s-87.7MiB/s (91.9MB/s-91.9MB/s), io=175MiB (184MB), run=2001-2001msec 00:10:52.608 WRITE: bw=87.1MiB/s (91.3MB/s), 87.1MiB/s-87.1MiB/s (91.3MB/s-91.3MB/s), io=174MiB (183MB), run=2001-2001msec 00:10:52.868 ----------------------------------------------------- 00:10:52.868 Suppressions used: 00:10:52.868 count bytes template 00:10:52.868 1 32 /usr/src/fio/parse.c 00:10:52.868 1 8 libtcmalloc_minimal.so 00:10:52.868 ----------------------------------------------------- 00:10:52.868 00:10:52.868 15:10:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:52.868 15:10:51 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:52.868 00:10:52.868 real 0m19.187s 00:10:52.868 user 0m14.930s 00:10:52.868 sys 0m3.953s 00:10:52.868 15:10:51 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.868 15:10:51 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 ************************************ 00:10:52.868 END TEST nvme_fio 00:10:52.868 ************************************ 00:10:52.868 ************************************ 00:10:52.868 END TEST nvme 00:10:52.868 ************************************ 00:10:52.868 00:10:52.868 real 1m31.274s 00:10:52.868 user 3m31.121s 00:10:52.868 sys 0m23.312s 00:10:52.868 15:10:51 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.868 15:10:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 15:10:51 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:52.868 15:10:51 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:52.868 15:10:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.868 15:10:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.868 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 ************************************ 00:10:52.868 START TEST nvme_scc 00:10:52.868 ************************************ 00:10:52.868 15:10:51 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:53.128 * Looking for test storage... 
00:10:53.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.128 15:10:51 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:53.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.128 --rc genhtml_branch_coverage=1 00:10:53.128 --rc genhtml_function_coverage=1 00:10:53.128 --rc genhtml_legend=1 00:10:53.128 --rc geninfo_all_blocks=1 00:10:53.128 --rc geninfo_unexecuted_blocks=1 00:10:53.128 00:10:53.128 ' 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:53.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.128 --rc genhtml_branch_coverage=1 00:10:53.128 --rc genhtml_function_coverage=1 00:10:53.128 --rc genhtml_legend=1 00:10:53.128 --rc geninfo_all_blocks=1 00:10:53.128 --rc geninfo_unexecuted_blocks=1 00:10:53.128 00:10:53.128 ' 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:53.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.128 --rc genhtml_branch_coverage=1 00:10:53.128 --rc genhtml_function_coverage=1 00:10:53.128 --rc genhtml_legend=1 00:10:53.128 --rc geninfo_all_blocks=1 00:10:53.128 --rc geninfo_unexecuted_blocks=1 00:10:53.128 00:10:53.128 ' 00:10:53.128 15:10:51 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:53.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.129 --rc genhtml_branch_coverage=1 00:10:53.129 --rc genhtml_function_coverage=1 00:10:53.129 --rc genhtml_legend=1 00:10:53.129 --rc geninfo_all_blocks=1 00:10:53.129 --rc geninfo_unexecuted_blocks=1 00:10:53.129 00:10:53.129 ' 00:10:53.129 15:10:51 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.129 15:10:51 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.129 15:10:51 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.129 15:10:51 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.129 15:10:51 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.129 15:10:51 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.129 15:10:51 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.129 15:10:51 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.129 15:10:51 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:53.129 15:10:51 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@10-13 -- # declare -A ctrls=() nvmes=() bdfs=(); declare -a ordered_ctrls=()
00:10:53.129 15:10:51 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:10:53.129 15:10:51 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:53.129 15:10:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname; [[ Linux == Linux ]]; [[ QEMU == QEMU ]]
00:10:53.129 15:10:51 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:53.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:53.957 Waiting for block devices as requested
00:10:53.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:54.216 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:54.216 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:54.476 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:59.797 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:59.797 15:10:57 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@47-50 -- # for ctrl in /sys/class/nvme/nvme*: found /sys/class/nvme/nvme0, pci=0000:00:11.0, pci_can_use 0000:00:11.0 -> 0
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
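The register dump that follows is produced by nvme_get: it runs nvme-cli, splits every "name : value" line on the first colon, and stores the pair in a bash associative array named after the controller (here nvme0). A condensed standalone sketch of that loop (an approximation for reading the log, not the verbatim nvme/functions.sh source):

    nvme_get() {                        # nvme_get <array> <subcommand> <device>
        local ref=$1 reg val
        local -gA "$ref=()"             # global associative array, e.g. nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # 'ps    0 ' -> 'ps0'
            val=${val# }                # drop the space after the colon, keep trailing padding
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"  # nvme0[vid]=0x1b36, nvme0[sn]='12341 ', ...
        done < <(/usr/local/src/nvme-cli/nvme "$2" "$3")
    }
    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "${nvme0[mn]} (${nvme0[sn]})"  # QEMU NVMe Ctrl (12341)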
00:10:59.797 15:10:57 nvme_scc -- nvme/functions.sh@22-23 -- # nvme0 id-ctrl registers parsed into the nvme0 array:
00:10:59.797     vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:10:59.798     rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:10:59.798     nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:10:59.798     mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:59.799     sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:59.799     sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:10:59.799     mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:59.800     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
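Two of the values above decode with small powers of two: mdts=7 caps a single transfer at 2^7 memory pages, and sqes=0x66/cqes=0x44 pack minimum and maximum queue-entry sizes into nibbles. A quick decode, assuming the controller's minimum page size is 4 KiB (MPSMIN=0, which is an assumption about this QEMU device, not read from the trace):

    mdts=7
    echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"   # 512 KiB
    sqes=0x66; cqes=0x44
    echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"             # 2^6 = 64 bytes
    echo "CQ entry: $(( 1 << (cqes & 0xf) )) bytes"             # 2^4 = 16 bytes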
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@54-55 -- # for ns in "$ctrl/${ctrl##*/}n"*: found /sys/class/nvme/nvme0/nvme0n1
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:59.800 15:10:58 nvme_scc -- nvme/functions.sh@22-23 -- # nvme0n1 id-ns registers parsed into the nvme0n1 array:
00:10:59.800     nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:59.801     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:59.801     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:59.801     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:59.801     lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:10:59.802     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
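From the values above, the namespace geometry follows directly: flbas=0x4 selects LBA format 4, whose lbads:12 means 2^12 = 4096-byte blocks, and nsze/ncap/nuse count blocks. A quick check of the implied size (arithmetic only, not part of the test):

    nsze=$((0x140000))                       # 1310720 blocks
    block=$((1 << 12))                       # lbads:12 -> 4096 bytes
    echo "$(( nsze * block )) bytes"         # 5368709120
    echo "$(( (nsze * block) >> 30 )) GiB"   # a 5 GiB namespace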
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
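The scan records each controller three ways: ctrls maps the device name to itself, nvmes maps it to the name of its per-namespace array, and bdfs maps it to the PCI address, while ordered_ctrls keeps numeric order. A sketch of how this bookkeeping can be walked afterwards (the loop body is hypothetical; only the array shapes come from the trace):

    declare -A nvme0_ns=([1]=nvme0n1)    # filled by the @58 step above
    declare -A ctrls=([nvme0]=nvme0) nvmes=([nvme0]=nvme0_ns) bdfs=([nvme0]=0000:00:11.0)
    declare -a ordered_ctrls=(nvme0)
    for ctrl in "${ordered_ctrls[@]}"; do
        declare -n ns_map=${nvmes[$ctrl]}    # nameref indirection to nvme0_ns
        echo "$ctrl at ${bdfs[$ctrl]}: ${ns_map[*]} (${#ns_map[@]} namespace(s))"
        unset -n ns_map
    done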
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@47-50 -- # for ctrl in /sys/class/nvme/nvme*: found /sys/class/nvme/nvme1, pci=0000:00:10.0, pci_can_use 0000:00:10.0 -> 0
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:59.802 15:10:58 nvme_scc -- nvme/functions.sh@22-23 -- # nvme1 id-ctrl registers parsed into the nvme1 array:
00:10:59.802     vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:10:59.803     rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:10:59.803
15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
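
The identify fields being banked here are what later checks key off; oacs=0x12a, for instance, is a bitmask of optional admin commands. Decoding it (bit positions per my reading of the NVMe base specification, so worth double-checking against the current revision):

    # oacs=0x12a -> which optional admin commands does QEMU's ctrl offer?
    oacs=0x12a
    (( oacs & 1<<1 )) && echo "Format NVM"              # bit 1
    (( oacs & 1<<3 )) && echo "Namespace Management"    # bit 3
    (( oacs & 1<<5 )) && echo "Directives"              # bit 5
    (( oacs & 1<<8 )) && echo "Doorbell Buffer Config"  # bit 8
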
00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:59.803 15:10:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:59.803 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:59.804 15:10:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.804 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:59.805 15:10:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
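
With the id-ctrl fields banked down through subnqn, ioccsz and msdbd, the trace next finishes the power-state rows (ps0, rwt) and pivots to the namespace side: functions.sh@53 takes a bash nameref (local -n _ctrl_ns=nvme1_ns) so a per-controller namespace map whose name is computed at runtime can be filled in, then runs nvme_get again with id-ns. A simplified sketch of that nameref pattern, using the sysfs glob from the trace:

    # Sketch of the nameref-based namespace bookkeeping (functions.sh@53-58).
    declare -A nvme1_ns
    walk_namespaces() {
        local ctrl=$1                      # e.g. /sys/class/nvme/nvme1
        local -n _ctrl_ns=${ctrl##*/}_ns   # nameref -> nvme1_ns
        local ns
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}  # key "1" -> "nvme1n1"
        done
    }
    walk_namespaces /sys/class/nvme/nvme1
    declare -p nvme1_ns                    # nvme1_ns=([1]="nvme1n1")
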
00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- 
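
The first three id-ns fields land as equal values (nsze = ncap = nuse = 0x17a17a): namespace size, capacity and utilization all match, so nothing about this QEMU namespace is thin-provisioned in practice. A one-liner to sanity-check that from the parsed values:

    # nsze/ncap/nuse from the trace; equality means fully allocated and used.
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a
    (( nsze == ncap && ncap == nuse )) && echo "fully allocated namespace"
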
nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.805 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:59.806 
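
nvme1n1 advertises eight LBA formats (nlbaf=7 is a zero-based count) and flbas=0x7, whose low four bits select the format in use; the lbaf7 row just below accordingly carries the "(in use)" marker, with lbads:12 (4096-byte data blocks) and ms:64 (64 bytes of metadata per block). Working the numbers from the parsed map (a sketch; the string surgery on the lbaf row is mine, not functions.sh):

    # Derive in-use block size and raw data capacity from the traced values.
    flbas=0x7
    fmt=$(( flbas & 0xf ))                      # -> 7
    lbaf='ms:64 lbads:12 rp:0 (in use)'         # nvme1n1[lbaf7] per the trace
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "block size: $(( 1 << lbads )) B"      # -> 4096
    echo "data bytes: $(( 0x17a17a * (1 << lbads) ))"  # ~6.3 GB
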
15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.806 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:59.807 15:10:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:59.807 15:10:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:59.807 15:10:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:59.807 15:10:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:59.807 15:10:58 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:59.807 15:10:58 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.807 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:59.808 15:10:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
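The wctemp/cctemp values captured just above are Kelvin thresholds, and registers such as oacs=0x12a, frmw=0x3 and lpa=0x7 are bit fields from Identify Controller. Both decode with plain shell arithmetic; a minimal sketch (bit meanings taken from the NVMe base specification, "nvme2" being the array this trace is filling):

echo $(( nvme2[wctemp] - 273 ))   # warning composite temperature: 343 K -> 70 C
echo $(( nvme2[cctemp] - 273 ))   # critical composite temperature: 373 K -> 100 C
(( nvme2[oacs] & (1 << 1) )) && echo "Format NVM command supported"     # oacs bit 1
(( nvme2[lpa]  & (1 << 2) )) && echo "extended data for Get Log Page"   # lpa bit 2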
00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:59.808 15:10:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.808 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
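Every functions.sh@21-23 triplet in this trace is one iteration of the same capture loop: split a line of nvme-cli output on ":" into a register name and a value, skip lines with no value, and eval the pair into the device's associative array. A minimal sketch reconstructed from the traced lines only (the trimming of keys and values is an assumption; the real functions.sh may differ in detail):

nvme_get_sketch() {
	local ref=$1 reg val; shift                        # functions.sh@17-18: ref=nvme2, nvme2n1, ...
	local -gA "$ref=()"                                # functions.sh@20: one global array per device
	while IFS=: read -r reg val; do                    # functions.sh@21: split "reg : val" lines
		[[ -n $val ]] || continue                      # functions.sh@22: keep lines that carry a value
		eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # functions.sh@23: e.g. nvme2[mdts]="7"
	done < <(/usr/local/src/nvme-cli/nvme "$@")        # functions.sh@16: id-ctrl / id-ns of the device
}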
00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
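sqes=0x66 and cqes=0x44 above pack two log2 sizes each (low nibble = required entry size, high nibble = maximum), and mdts=7 is a power-of-two multiple of the controller's minimum page size. Decoding is plain arithmetic; the 4 KiB MPSMIN below is an assumption, not something this trace reports:

echo $(( 1 << (0x66 & 0xf) ))    # submission queue entry: 64 bytes
echo $(( 1 << (0x44 & 0xf) ))    # completion queue entry: 16 bytes
echo $(( (1 << 7) * 4096 ))      # mdts=7 -> max transfer 512 KiB at 4 KiB pages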
00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:59.809 
15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:59.809 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
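The functions.sh@53-58 steps that opened this block show how the controller's namespaces are walked before each per-namespace id-ns capture: a nameref to a per-controller table, a sysfs glob, then nvme_get per namespace. Sketched from the traced lines (the enclosing function, the origin of $ctrl, and the "|| continue" fallback are assumptions from context):

local -n _ctrl_ns=${ctrl##*/}_ns              # functions.sh@53: e.g. nvme2_ns
for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: /sys/class/nvme/nvme2/nvme2n1 ...
	[[ -e $ns ]] || continue                  # functions.sh@55: skip if the glob matched nothing
	ns_dev=${ns##*/}                          # functions.sh@56: nvme2n1
	nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57: fill the nvme2n1 array
	_ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: index by nsid: nvme2_ns[1]=nvme2n1
done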
00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:59.810 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:59.811 15:10:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:59.811 15:10:58 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:59.811 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:59.812 15:10:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:59.812 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
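Each lbafN entry recorded for these namespaces is one LBA format: ms = metadata bytes per block, lbads = log2 of the data bytes per block, rp = relative performance, with "(in use)" marking the format selected by flbas. With flbas=0x4 the in-use format is lbaf4, so the block size works out as:

echo $(( 1 << 12 ))   # lbaf4 "ms:0 lbads:12" -> 4096-byte logical blocks, no metadata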
00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 
15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 
15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.813 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:59.814 15:10:58 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:59.814 
15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.814 15:10:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:59.815 15:10:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:59.815 15:10:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:59.815 15:10:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:59.815 15:10:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:59.815 15:10:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.815 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:59.816 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:00.077 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.077 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 
15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
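The controller dump just above records `nvme3[oncs]=0x15d`. ONCS is the Optional NVM Command Support bitmask from id-ctrl; bit 8 flags the Copy command, which is the SCC capability this suite probes for, and `ctrl_has_scc` later in this log tests exactly that bit. A one-line sketch of the same check:

    # ONCS bit 8 = Copy command support; 0x15d is what the QEMU
    # controllers report in this run, so the test succeeds.
    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "Copy (simple copy) supported"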
00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.079 15:10:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:00.079 15:10:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:00.079 
15:10:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:00.080 15:10:58 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:00.080 15:10:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:00.080 15:10:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:00.080 15:10:58 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:00.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:01.624 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.624 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.624 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.624 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:11:01.624 15:11:00 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:01.624 15:11:00 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:01.624 15:11:00 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.624 15:11:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 ************************************ 00:11:01.624 START TEST nvme_simple_copy 00:11:01.624 ************************************ 00:11:01.624 15:11:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:01.883 Initializing NVMe Controllers 00:11:01.883 Attaching to 0000:00:10.0 00:11:01.883 Controller supports SCC. Attached to 0000:00:10.0 00:11:01.883 Namespace ID: 1 size: 6GB 00:11:01.883 Initialization complete. 00:11:01.883 00:11:01.883 Controller QEMU NVMe Ctrl (12340 ) 00:11:01.883 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:01.883 Namespace Block Size:4096 00:11:01.883 Writing LBAs 0 to 63 with Random Data 00:11:01.883 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:01.883 LBAs matching Written Data: 64 00:11:01.883 ************************************ 00:11:01.883 END TEST nvme_simple_copy 00:11:01.883 ************************************ 00:11:01.883 00:11:01.883 real 0m0.282s 00:11:01.883 user 0m0.095s 00:11:01.883 sys 0m0.086s 00:11:01.883 15:11:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.883 15:11:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:01.883 ************************************ 00:11:01.883 END TEST nvme_scc 00:11:01.883 ************************************ 00:11:01.883 00:11:01.883 real 0m9.062s 00:11:01.883 user 0m1.566s 00:11:01.883 sys 0m2.444s 00:11:01.883 15:11:00 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.883 15:11:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:02.143 15:11:00 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:02.143 15:11:00 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:02.143 15:11:00 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:02.143 15:11:00 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:02.143 15:11:00 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:02.143 15:11:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:02.143 15:11:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.143 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:11:02.143 ************************************ 00:11:02.143 START TEST nvme_fdp 00:11:02.143 ************************************ 00:11:02.143 15:11:00 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:11:02.143 * Looking for test storage... 
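The controller walk traced just before the simple-copy run reduces to one bitmask test per device: ctrl_has_scc reads the ONCS (Optional NVM Command Support) identify field and checks bit 8, which advertises the Copy (Simple Copy) command. Every controller here reports oncs=0x15d, and 0x15d & 0x100 is non-zero, so all four qualify and nvme1 is echoed as the first match. A minimal sketch of that check, where query_oncs is a stub standing in for the get_oncs/get_nvme_ctrl_feature chain in nvme/functions.sh:

    # Stub: in the real script this value comes from `nvme id-ctrl`.
    query_oncs() { echo 0x15d; }

    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(query_oncs "$ctrl")
        # Bit 8 of ONCS advertises Simple Copy; shell arithmetic exits 0
        # exactly when the bit is set (0x15d & 0x100 = 0x100 here).
        (( oncs & 1 << 8 ))
    }

    ctrl_has_scc nvme1 && echo "nvme1 supports Simple Copy"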
00:11:02.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:02.143 15:11:00 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:02.143 15:11:00 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:11:02.143 15:11:00 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.143 15:11:00 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:02.143 15:11:00 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.144 15:11:00 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.404 15:11:00 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:02.404 15:11:00 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.404 15:11:00 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.404 --rc genhtml_branch_coverage=1 00:11:02.404 --rc genhtml_function_coverage=1 00:11:02.404 --rc genhtml_legend=1 00:11:02.404 --rc geninfo_all_blocks=1 00:11:02.404 --rc geninfo_unexecuted_blocks=1 00:11:02.404 00:11:02.404 ' 00:11:02.404 15:11:00 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.404 --rc genhtml_branch_coverage=1 00:11:02.404 --rc genhtml_function_coverage=1 00:11:02.404 --rc genhtml_legend=1 00:11:02.404 --rc geninfo_all_blocks=1 00:11:02.404 --rc geninfo_unexecuted_blocks=1 00:11:02.404 00:11:02.404 ' 00:11:02.404 15:11:00 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:11:02.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.404 --rc genhtml_branch_coverage=1 00:11:02.404 --rc genhtml_function_coverage=1 00:11:02.404 --rc genhtml_legend=1 00:11:02.404 --rc geninfo_all_blocks=1 00:11:02.404 --rc geninfo_unexecuted_blocks=1 00:11:02.404 00:11:02.404 ' 00:11:02.404 15:11:00 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.404 --rc genhtml_branch_coverage=1 00:11:02.404 --rc genhtml_function_coverage=1 00:11:02.404 --rc genhtml_legend=1 00:11:02.405 --rc geninfo_all_blocks=1 00:11:02.405 --rc geninfo_unexecuted_blocks=1 00:11:02.405 00:11:02.405 ' 00:11:02.405 15:11:00 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.405 15:11:00 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.405 15:11:00 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.405 15:11:00 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.405 15:11:00 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.405 15:11:00 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.405 15:11:00 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.405 15:11:00 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.405 15:11:00 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:02.405 15:11:00 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
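The lt/cmp_versions trace above is a plain field-wise numeric compare: each version string is split on the characters ".-:" and walked left to right until a field differs, with missing fields treated as 0, so "1.15" sorts below "2" because the first fields already differ. An illustrative condensation of that walk (cmp_lt is a hypothetical name, not the script's own function):

    cmp_lt() {   # usage: cmp_lt 1.15 2  -> exit 0 if $1 is lower than $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not lower-than
    }

    cmp_lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function coverage opts"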
00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:02.405 15:11:00 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:02.405 15:11:00 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:02.405 15:11:00 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:02.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.974 Waiting for block devices as requested 00:11:03.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.493 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.493 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:08.774 15:11:06 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:08.774 15:11:06 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:08.774 15:11:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.774 15:11:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:08.774 15:11:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:08.774 15:11:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:08.774 15:11:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:08.774 15:11:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:08.774 15:11:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.774 15:11:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
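The long eval stream that continues below is nvme_get populating a per-controller associative array: `nvme id-ctrl` output is read line by line with IFS=:, the field name becomes the key, and the remainder of the line (further colons included, which is how composite values such as ps0 and rwt survive intact) becomes the value. A condensed sketch of that loop, assuming nvme-cli's human-readable "field : value" output format:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # "vid   " -> "vid"
        val=${val# }                  # drop the single space after ':'
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=$val              # nvme0[vid]=0x1b36, nvme0[oncs]=0x15d, ...
    done < <(nvme id-ctrl /dev/nvme0)

    echo "ONCS: ${nvme0[oncs]}"

With more input fields than variables, read hands val the rest of the line verbatim, so a line like "rwt : 0 rwl:0 idle_power:- active_power:-" lands as one value, matching the assignments seen in the trace.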
00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:08.774 15:11:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.774 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:08.775 15:11:07 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
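One of the fields captured above, mdts=7, bounds I/O sizes for the rest of the run: per the NVMe spec, MDTS is a power-of-two multiplier on the controller's minimum memory page size (CAP.MPSMIN). As a worked example, assuming the usual CAP.MPSMIN of 0 (4 KiB pages), which the CAP register itself would confirm:

    mdts=7
    mpsmin_bytes=4096                  # assumption: CAP.MPSMIN = 0 -> 4 KiB
    echo $(( mpsmin_bytes << mdts ))   # 524288 bytes = 512 KiB max per command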
00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:08.775 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.775 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:08.776 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:08.776 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:08.777 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.777 
15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:08.777 15:11:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:08.777 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:08.778 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.778 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:08.779 15:11:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:08.779 15:11:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:08.779 15:11:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.779 15:11:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # 
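Here the first controller (nvme0 at BDF 0000:00:11.0) has been registered and the enumeration loop advances to /sys/class/nvme/nvme1 at 0000:00:10.0. The shape of the loop driving functions.sh@47-52, as it appears in the trace; the BDF derivation via readlink and the pci_can_use internals are assumptions:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of 0000:00:10.0
        pci_can_use "$pci" || continue                   # skips devices outside the allow list
        ctrl_dev=${ctrl##*/}                             # nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills the nvme1=() array traced below
    done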
IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 
15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:08.779 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 
15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.780 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:08.781 15:11:07 nvme_fdp -- 
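The id-ctrl fields landing here are capability encodings: sqes=0x66 and cqes=0x44 give maximum/required queue entry sizes as powers of two (64-byte submission and 16-byte completion entries), and oncs=0x15d advertises optional NVM commands one bit per command. A hypothetical decode of two of those bits, using bit positions from the NVMe base specification; this check is not part of the traced script:

    # hypothetical decode of nvme1[oncs]=0x15d filled in above;
    # bit 2 = Dataset Management, bit 3 = Write Zeroes per the NVMe base spec
    if (( nvme1[oncs] & (1 << 2) )); then
        echo "nvme1 supports Dataset Management (deallocate/TRIM)"
    fi
    if (( nvme1[oncs] & (1 << 3) )); then
        echo "nvme1 supports Write Zeroes"
    fi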
nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.781 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:08.782 15:11:07 nvme_fdp -- 
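For namespace nvme1n1 the geometry fields have just been captured: nlbaf=7 means eight LBA formats (the field is zero-based), and the low nibble of flbas=0x7 selects format 7 as the active one, which is why lbaf7 carries the "(in use)" tag further down. The logical block size follows from the lbads field as 2^lbads bytes. A hypothetical decode against the array built above:

    # hypothetical decode of the in-use LBA format from nvme1n1 above
    fmt=$(( nvme1n1[flbas] & 0xf ))            # 0x7 -> lbaf7
    lbaf=${nvme1n1[lbaf$fmt]}                  # "ms:64 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}                      # "12 rp:0 (in use)"
    lbads=${lbads%% *}                         # "12"
    echo "nvme1n1 block size: $((1 << lbads)) bytes, $((nvme1n1[nsze])) blocks"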
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:08.782 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:08.783 15:11:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:08.783 15:11:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:08.783 15:11:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:08.783 15:11:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:08.783 
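Each pass of the loop ends with the bookkeeping at functions.sh@58-63, visible just above before the loop advances to nvme2 at 0000:00:12.0: the namespace is slotted into a per-controller array (nvme1_ns, reached through the nameref taken at functions.sh@53), and the controller is recorded in the global ctrls, nvmes, bdfs and ordered_ctrls maps. Note that nvmes stores the *name* of the per-controller namespace array, so consumers resolve it with a nameref. A hypothetical walk over those maps, not part of the traced script:

    # hypothetical consumer of the maps populated at functions.sh@58-63
    for ctrl in "${ordered_ctrls[@]}"; do
        [[ -n $ctrl ]] || continue
        declare -n ns_map=${nvmes[$ctrl]}   # e.g. nvme1_ns, mapping 1 -> nvme1n1
        echo "$ctrl @ ${bdfs[$ctrl]}: ${#ns_map[@]} namespace(s): ${ns_map[*]}"
        unset -n ns_map                     # drop the nameref before re-pointing it
    done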
15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.783 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:08.784 15:11:07 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.784 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.785 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:08.786 15:11:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.786 15:11:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:08.787 15:11:07 nvme_fdp -- 
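
[Every register assignment in this stretch of the trace comes from the same small loop: nvme_get runs nvme-cli, splits each "field : value" line on the colon, and evals the pair into a global associative array named after the device. A minimal re-creation under those assumptions; the trimming details are simplified relative to the traced helper, which preserves nvme-cli's trailing padding (e.g. sn="12342 ").]

```bash
# Minimal re-creation of the nvme_get parsing loop seen throughout this
# trace: "field : value" lines from nvme-cli become entries in a global
# associative array (nvme2, nvme2n1, ...). Whitespace handling simplified.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # e.g. declares nvme2n1 globally
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue  # skip headers/blank lines
        reg=${reg//[[:space:]]/}            # "nsze   " -> "nsze"
        val=${val# }                        # drop the single pad space
        eval "${ref}[$reg]=\"\$val\""       # nvme2n1[nsze]="0x100000"
    done < <(nvme "$@")                     # trace uses a local nvme-cli build
}

nvme_get nvme2n1 id-ns /dev/nvme2n1         # then read ${nvme2n1[nsze]} etc.
```
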
nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:08.787 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- 
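
[Once a namespace array such as nvme2n1 is filled in, the trace registers it in the per-controller map through a nameref (_ctrl_ns[${ns##*n}]=nvme2n1 at functions.sh line 58) and moves on to nvme2n2. Downstream test code can then read the arrays directly; for example, the in-use LBA format can be decoded from flbas, which here selects lbaf4 ("ms:0 lbads:12 rp:0 (in use)", i.e. 4 KiB logical blocks). The helper below is purely illustrative and not part of functions.sh; only the array and field names come from the trace.]

```bash
# Hypothetical consumer of the arrays built above (helper name and body
# are illustrative only; nvme2n1[flbas]/nvme2n1[lbaf4] come from the trace).
active_block_size() {
    local -n ns=$1                        # nameref, e.g. -> nvme2n1
    local idx=$((ns[flbas] & 0xf))        # low nibble picks the LBA format
    local fmt=${ns[lbaf$idx]}             # "ms:0 lbads:12 rp:0 (in use)"
    local lbads=${fmt#*lbads:}            # "12 rp:0 (in use)"
    echo $((1 << ${lbads%% *}))           # 2^12 -> 4096-byte blocks
}

active_block_size nvme2n1                 # prints 4096 for this trace
```
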
nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:08.788 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:08.789 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.789 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:08.790 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.052 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:09.053 
15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:09.053 15:11:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:09.053 15:11:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:09.053 15:11:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:09.053 15:11:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:09.053 15:11:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:09.054 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 
15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:09.054 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 
15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.055 15:11:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:09.055 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:09.056 15:11:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:09.056 15:11:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
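The loop in progress here is the FDP probe: read CTRATT out of the parsed Identify Controller data and test bit 19, the Flexible Data Placement capability. Controllers reporting ctratt=0x8000 fall through, while nvme3's 0x88010 has the bit set (0x88010 & 0x80000 != 0), which is why only nvme3 is echoed below. The same check as a self-contained sketch (assumes nvme-cli; device name illustrative):

    # CTRATT bit 19 advertises FDP support: (( ctratt & 1 << 19 )).
    ctrl_has_fdp() {
        local ctratt
        ctratt=$(nvme id-ctrl "/dev/$1" | awk -F':[[:space:]]*' '/^ctratt/ {print $2}')
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp nvme3 && echo nvme3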
00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:09.057 15:11:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:09.057 15:11:07 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:09.057 15:11:07 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:09.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:10.611 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.611 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.611 15:11:09 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:10.611 15:11:09 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:10.611 15:11:09 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.611 15:11:09 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:10.611 ************************************ 00:11:10.611 START TEST nvme_flexible_data_placement 00:11:10.611 ************************************ 00:11:10.611 15:11:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:10.871 Initializing NVMe Controllers 00:11:10.871 Attaching to 0000:00:13.0 00:11:10.871 Controller supports FDP Attached to 0000:00:13.0 00:11:10.871 Namespace ID: 1 Endurance Group ID: 1 00:11:10.871 Initialization complete. 00:11:10.871 00:11:10.871 ================================== 00:11:10.871 == FDP tests for Namespace: #01 == 00:11:10.871 ================================== 00:11:10.871 00:11:10.871 Get Feature: FDP: 00:11:10.871 ================= 00:11:10.871 Enabled: Yes 00:11:10.871 FDP configuration Index: 0 00:11:10.871 00:11:10.871 FDP configurations log page 00:11:10.871 =========================== 00:11:10.871 Number of FDP configurations: 1 00:11:10.871 Version: 0 00:11:10.871 Size: 112 00:11:10.871 FDP Configuration Descriptor: 0 00:11:10.871 Descriptor Size: 96 00:11:10.871 Reclaim Group Identifier format: 2 00:11:10.871 FDP Volatile Write Cache: Not Present 00:11:10.871 FDP Configuration: Valid 00:11:10.871 Vendor Specific Size: 0 00:11:10.871 Number of Reclaim Groups: 2 00:11:10.871 Number of Reclaim Unit Handles: 8 00:11:10.871 Max Placement Identifiers: 128 00:11:10.871 Number of Namespaces Supported: 256 00:11:10.871 Reclaim Unit Nominal Size: 6000000 bytes 00:11:10.871 Estimated Reclaim Unit Time Limit: Not Reported 00:11:10.871 RUH Desc #000: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #001: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #002: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #003: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #004: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #005: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #006: RUH Type: Initially Isolated 00:11:10.871 RUH Desc #007: RUH Type: Initially Isolated 00:11:10.871 00:11:10.871 FDP reclaim unit handle usage log page 00:11:10.871 ====================================== 00:11:10.871 Number of Reclaim Unit Handles: 8 00:11:10.871 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:10.871 RUH Usage Desc #001: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #002: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #003: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #004: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #005: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #006: RUH Attributes: Unused 00:11:10.871 RUH Usage Desc #007: RUH Attributes: Unused 00:11:10.871 00:11:10.871 FDP statistics log page 00:11:10.871 ======================= 00:11:10.871 Host bytes with metadata written: 1459331072 00:11:10.871 Media bytes with metadata written: 1460006912 00:11:10.871 Media bytes erased: 0 00:11:10.872 00:11:10.872 FDP Reclaim unit handle status 00:11:10.872 ============================== 00:11:10.872 Number of RUHS descriptors: 2 00:11:10.872 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003046 00:11:10.872 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:10.872 00:11:10.872 FDP write on placement id: 0 success 00:11:10.872 00:11:10.872 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:11:10.872 00:11:10.872 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:10.872 00:11:10.872 Get Feature: FDP Events for Placement handle: #0 00:11:10.872 ======================== 00:11:10.872 Number of FDP Events: 6 00:11:10.872 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:10.872 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:10.872 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:10.872 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:10.872 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:10.872 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:10.872 00:11:10.872 FDP events log page 00:11:10.872 =================== 00:11:10.872 Number of FDP events: 1 00:11:10.872 FDP Event #0: 00:11:10.872 Event Type: RU Not Written to Capacity 00:11:10.872 Placement Identifier: Valid 00:11:10.872 NSID: Valid 00:11:10.872 Location: Valid 00:11:10.872 Placement Identifier: 0 00:11:10.872 Event Timestamp: 3 00:11:10.872 Namespace Identifier: 1 00:11:10.872 Reclaim Group Identifier: 0 00:11:10.872 Reclaim Unit Handle Identifier: 0 00:11:10.872 00:11:10.872 FDP test passed 00:11:10.872 00:11:10.872 real 0m0.250s 00:11:10.872 user 0m0.064s 00:11:10.872 sys 0m0.084s 00:11:10.872 15:11:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.872 ************************************ 00:11:10.872 END TEST nvme_flexible_data_placement 00:11:10.872 ************************************ 00:11:10.872 15:11:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:11.132 ************************************ 00:11:11.132 END TEST nvme_fdp 00:11:11.132 ************************************ 00:11:11.132 00:11:11.132 real 0m8.966s 00:11:11.132 user 0m1.544s 00:11:11.132 sys 0m2.438s 00:11:11.132 15:11:09 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.132 15:11:09 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:11.132 15:11:09 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:11.132 15:11:09 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:11.132 15:11:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:11.132 15:11:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.132 15:11:09 -- common/autotest_common.sh@10 -- # set +x 00:11:11.132 ************************************ 00:11:11.132 START TEST nvme_rpc 00:11:11.132 ************************************ 00:11:11.132 15:11:09 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:11.132 * Looking for test storage...
00:11:11.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:11.132 15:11:09 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:11.132 15:11:09 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:11.132 15:11:09 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.392 15:11:09 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.392 --rc genhtml_branch_coverage=1 00:11:11.392 --rc genhtml_function_coverage=1 00:11:11.392 --rc genhtml_legend=1 00:11:11.392 --rc geninfo_all_blocks=1 00:11:11.392 --rc geninfo_unexecuted_blocks=1 00:11:11.392 00:11:11.392 ' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.392 --rc genhtml_branch_coverage=1 00:11:11.392 --rc genhtml_function_coverage=1 00:11:11.392 --rc genhtml_legend=1 00:11:11.392 --rc geninfo_all_blocks=1 00:11:11.392 --rc geninfo_unexecuted_blocks=1 00:11:11.392 00:11:11.392 ' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:11:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.392 --rc genhtml_branch_coverage=1 00:11:11.392 --rc genhtml_function_coverage=1 00:11:11.392 --rc genhtml_legend=1 00:11:11.392 --rc geninfo_all_blocks=1 00:11:11.392 --rc geninfo_unexecuted_blocks=1 00:11:11.392 00:11:11.392 ' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.392 --rc genhtml_branch_coverage=1 00:11:11.392 --rc genhtml_function_coverage=1 00:11:11.392 --rc genhtml_legend=1 00:11:11.392 --rc geninfo_all_blocks=1 00:11:11.392 --rc geninfo_unexecuted_blocks=1 00:11:11.392 00:11:11.392 ' 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79041 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:11.392 15:11:09 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79041 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 79041 ']' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.392 15:11:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.651 [2024-10-01 15:11:09.985794] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
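get_first_nvme_bdf, expanded just above, amounts to a three-line helper: ask gen_nvme.sh for the generated bdev config, pull every controller PCI address out with jq, and keep the first one. Reproduced from the trace ($rootdir is the SPDK checkout):

    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        echo "${bdfs[0]}"    # here: 0000:00:10.0, which becomes the test's bdf
    }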
00:11:11.651 [2024-10-01 15:11:09.986092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79041 ] 00:11:11.651 [2024-10-01 15:11:10.156839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:11.910 [2024-10-01 15:11:10.210482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.910 [2024-10-01 15:11:10.210582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.477 15:11:10 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.477 15:11:10 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:12.477 15:11:10 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:12.737 Nvme0n1 00:11:12.737 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:12.737 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:12.737 request: 00:11:12.737 { 00:11:12.737 "bdev_name": "Nvme0n1", 00:11:12.737 "filename": "non_existing_file", 00:11:12.737 "method": "bdev_nvme_apply_firmware", 00:11:12.737 "req_id": 1 00:11:12.737 } 00:11:12.737 Got JSON-RPC error response 00:11:12.737 response: 00:11:12.737 { 00:11:12.737 "code": -32603, 00:11:12.737 "message": "open file failed." 00:11:12.737 } 00:11:12.737 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:12.737 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:12.737 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:12.996 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:12.996 15:11:11 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79041 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 79041 ']' 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 79041 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79041 00:11:12.996 killing process with pid 79041 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79041' 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@969 -- # kill 79041 00:11:12.996 15:11:11 nvme_rpc -- common/autotest_common.sh@974 -- # wait 79041 00:11:13.565 ************************************ 00:11:13.565 END TEST nvme_rpc 00:11:13.565 ************************************ 00:11:13.565 00:11:13.565 real 0m2.408s 00:11:13.565 user 0m4.249s 00:11:13.565 sys 0m0.736s 00:11:13.565 15:11:11 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.565 15:11:11 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.565 15:11:11 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:13.565 15:11:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:11:13.565 15:11:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.565 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:11:13.565 ************************************ 00:11:13.565 START TEST nvme_rpc_timeouts 00:11:13.565 ************************************ 00:11:13.565 15:11:11 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:13.825 * Looking for test storage... 00:11:13.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.825 15:11:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:13.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.825 --rc genhtml_branch_coverage=1 00:11:13.825 --rc genhtml_function_coverage=1 00:11:13.825 --rc genhtml_legend=1 00:11:13.825 --rc geninfo_all_blocks=1 00:11:13.825 --rc geninfo_unexecuted_blocks=1 00:11:13.825 00:11:13.825 ' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:13.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.825 --rc genhtml_branch_coverage=1 00:11:13.825 --rc genhtml_function_coverage=1 00:11:13.825 --rc genhtml_legend=1 00:11:13.825 --rc geninfo_all_blocks=1 00:11:13.825 --rc geninfo_unexecuted_blocks=1 00:11:13.825 00:11:13.825 ' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:13.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.825 --rc genhtml_branch_coverage=1 00:11:13.825 --rc genhtml_function_coverage=1 00:11:13.825 --rc genhtml_legend=1 00:11:13.825 --rc geninfo_all_blocks=1 00:11:13.825 --rc geninfo_unexecuted_blocks=1 00:11:13.825 00:11:13.825 ' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:13.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.825 --rc genhtml_branch_coverage=1 00:11:13.825 --rc genhtml_function_coverage=1 00:11:13.825 --rc genhtml_legend=1 00:11:13.825 --rc geninfo_all_blocks=1 00:11:13.825 --rc geninfo_unexecuted_blocks=1 00:11:13.825 00:11:13.825 ' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79095 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79095 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79127 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:13.825 15:11:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79127 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 79127 ']' 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.825 15:11:12 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:13.825 [2024-10-01 15:11:12.345940] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:11:13.825 [2024-10-01 15:11:12.346776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79127 ] 00:11:14.085 [2024-10-01 15:11:12.515986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.085 [2024-10-01 15:11:12.564606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.085 [2024-10-01 15:11:12.564693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.652 15:11:13 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.652 Checking default timeout settings: 00:11:14.652 15:11:13 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:11:14.652 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:14.652 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:15.221 Making settings changes with rpc: 00:11:15.221 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:15.221 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:15.479 Check default vs. modified settings: 00:11:15.479 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:15.479 15:11:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79095 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79095 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.738 Setting action_on_timeout is changed as expected. 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79095 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79095 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:15.738 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.738 Setting timeout_us is changed as expected. 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
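Each per-setting check above follows one pattern: grep the option out of each saved config, isolate the value with awk, strip punctuation with sed, then require that the default and modified values differ. Condensed (file names and values taken from this run):

    get_setting() {    # $1 = option name, $2 = saved-config file
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_79095)      # -> 0
    modified=$(get_setting timeout_us /tmp/settings_modified_79095)   # -> 12000000
    [[ $before != "$modified" ]] && echo 'Setting timeout_us is changed as expected.'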
00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79095 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79095 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:15.739 Setting timeout_admin_us is changed as expected. 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79095 /tmp/settings_modified_79095 00:11:15.739 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79127 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 79127 ']' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 79127 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79127 00:11:15.739 killing process with pid 79127 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79127' 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 79127 00:11:15.739 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 79127 00:11:16.306 RPC TIMEOUT SETTING TEST PASSED. 00:11:16.306 15:11:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
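Stripped of the harness, the whole nvme_rpc_timeouts test is three RPC calls plus the comparison above (options reproduced from the trace; a running spdk_tgt is assumed, and the temp-file suffix is illustrative):

    scripts/rpc.py save_config > /tmp/settings_default_$$
    scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    scripts/rpc.py save_config > /tmp/settings_modified_$$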
00:11:16.306 ************************************ 00:11:16.306 END TEST nvme_rpc_timeouts 00:11:16.306 ************************************ 00:11:16.306 00:11:16.306 real 0m2.689s 00:11:16.306 user 0m5.152s 00:11:16.306 sys 0m0.755s 00:11:16.306 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.306 15:11:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:16.306 15:11:14 -- spdk/autotest.sh@239 -- # uname -s 00:11:16.306 15:11:14 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:16.306 15:11:14 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:16.306 15:11:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:16.306 15:11:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.306 15:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.306 ************************************ 00:11:16.306 START TEST sw_hotplug 00:11:16.306 ************************************ 00:11:16.306 15:11:14 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:16.565 * Looking for test storage... 00:11:16.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.565 15:11:14 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:16.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.565 --rc genhtml_branch_coverage=1 00:11:16.565 --rc genhtml_function_coverage=1 00:11:16.565 --rc genhtml_legend=1 00:11:16.565 --rc geninfo_all_blocks=1 00:11:16.565 --rc geninfo_unexecuted_blocks=1 00:11:16.565 00:11:16.565 ' 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:16.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.565 --rc genhtml_branch_coverage=1 00:11:16.565 --rc genhtml_function_coverage=1 00:11:16.565 --rc genhtml_legend=1 00:11:16.565 --rc geninfo_all_blocks=1 00:11:16.565 --rc geninfo_unexecuted_blocks=1 00:11:16.565 00:11:16.565 ' 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:16.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.565 --rc genhtml_branch_coverage=1 00:11:16.565 --rc genhtml_function_coverage=1 00:11:16.565 --rc genhtml_legend=1 00:11:16.565 --rc geninfo_all_blocks=1 00:11:16.565 --rc geninfo_unexecuted_blocks=1 00:11:16.565 00:11:16.565 ' 00:11:16.565 15:11:14 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:16.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.565 --rc genhtml_branch_coverage=1 00:11:16.565 --rc genhtml_function_coverage=1 00:11:16.565 --rc genhtml_legend=1 00:11:16.565 --rc geninfo_all_blocks=1 00:11:16.565 --rc geninfo_unexecuted_blocks=1 00:11:16.565 00:11:16.565 ' 00:11:16.565 15:11:14 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:17.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.393 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.393 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.393 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.393 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.393 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:17.393 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:17.393 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
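nvme_in_userspace, expanded in the trace that follows, is one lspci pipeline plus per-address allow/deny checks (pci_can_use honoring PCI_ALLOWED/PCI_BLOCKED). The pipeline keeps PCI functions with class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe); reproduced from the trace apart from line breaks:

    lspci -mm -n -D | grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it yields the four QEMU controllers, 0000:00:10.0 through 0000:00:13.0, of which the test keeps the first two (nvme_count=2).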
00:11:17.393 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:17.393 15:11:15 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:17.393 15:11:15 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:17.394 15:11:15 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:17.394 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:17.394 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:17.394 15:11:15 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.219 Waiting for block devices as requested 00:11:18.478 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.478 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.736 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.736 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.008 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:24.008 15:11:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:24.008 15:11:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.576 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:24.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.576 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:24.834 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:25.092 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.092 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79989 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:25.352 15:11:23 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:25.352 15:11:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:25.612 Initializing NVMe Controllers 00:11:25.612 Attaching to 0000:00:10.0 00:11:25.612 Attaching to 0000:00:11.0 00:11:25.612 Attached to 0000:00:10.0 00:11:25.612 Attached to 0000:00:11.0 00:11:25.612 Initialization complete. Starting I/O... 
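The remove/attach cycles below are software-triggered; nothing is physically unplugged. Outside the hotplug example the same effect comes from the standard Linux sysfs knobs (paths are the usual kernel interface, not lifted from this trace):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # controller vanishes; outstanding I/O is aborted
    echo 1 > /sys/bus/pci/rescan                  # bus rescan brings it back, unbound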
00:11:25.612 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:25.612 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:25.612 00:11:26.990 QEMU NVMe Ctrl (12340 ): 1608 I/Os completed (+1608) 00:11:26.990 QEMU NVMe Ctrl (12341 ): 1609 I/Os completed (+1609) 00:11:26.990 00:11:27.579 QEMU NVMe Ctrl (12340 ): 3707 I/Os completed (+2099) 00:11:27.579 QEMU NVMe Ctrl (12341 ): 3714 I/Os completed (+2105) 00:11:27.579 00:11:28.955 QEMU NVMe Ctrl (12340 ): 6099 I/Os completed (+2392) 00:11:28.955 QEMU NVMe Ctrl (12341 ): 6108 I/Os completed (+2394) 00:11:28.955 00:11:29.890 QEMU NVMe Ctrl (12340 ): 8451 I/Os completed (+2352) 00:11:29.890 QEMU NVMe Ctrl (12341 ): 8460 I/Os completed (+2352) 00:11:29.890 00:11:30.824 QEMU NVMe Ctrl (12340 ): 10767 I/Os completed (+2316) 00:11:30.824 QEMU NVMe Ctrl (12341 ): 10776 I/Os completed (+2316) 00:11:30.824 00:11:31.439 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.439 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.439 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.439 [2024-10-01 15:11:29.885182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:31.439 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:31.439 [2024-10-01 15:11:29.887063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.887256] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.887311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.887414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:31.439 [2024-10-01 15:11:29.891818] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.891952] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.892001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.892090] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.439 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.439 [2024-10-01 15:11:29.929624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
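The hotplug events themselves are the "echo 1" steps traced at nvme/sw_hotplug.sh@40. xtrace hides the redirect target, but the standard Linux mechanism, and presumably what the script does here, is a write to each controller's sysfs remove node:

    for dev in "${nvmes[@]}"; do                      # traced @39
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # traced @40; redirect target is an assumption
    done

Writing 1 to remove detaches the device from its driver and drops it from the PCI tree, so the nvme_ctrlr_fail and "aborting outstanding command" errors around these events are the expected consequence of yanking a controller mid-I/O, not test failures.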
00:11:31.439 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:31.439 [2024-10-01 15:11:29.931096] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.931140] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.439 [2024-10-01 15:11:29.931162] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 [2024-10-01 15:11:29.931191] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:31.440 [2024-10-01 15:11:29.932808] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 [2024-10-01 15:11:29.932841] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 [2024-10-01 15:11:29.932864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 [2024-10-01 15:11:29.932882] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.440 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:31.440 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:31.440 EAL: Scan for (pci) bus failed. 00:11:31.440 15:11:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.698 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.698 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.698 Attaching to 0000:00:10.0 00:11:31.698 Attached to 0000:00:10.0 00:11:31.957 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.957 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.957 15:11:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.957 Attaching to 0000:00:11.0 00:11:31.957 Attached to 0000:00:11.0 00:11:32.895 QEMU NVMe Ctrl (12340 ): 2208 I/Os completed (+2208) 00:11:32.895 QEMU NVMe Ctrl (12341 ): 1956 I/Os completed (+1956) 00:11:32.895 00:11:33.829 QEMU NVMe Ctrl (12340 ): 4628 I/Os completed (+2420) 00:11:33.829 QEMU NVMe Ctrl (12341 ): 4376 I/Os completed (+2420) 00:11:33.829 00:11:34.764 QEMU NVMe Ctrl (12340 ): 7044 I/Os completed (+2416) 00:11:34.764 QEMU NVMe Ctrl (12341 ): 6792 I/Os completed (+2416) 00:11:34.764 00:11:35.701 QEMU NVMe Ctrl (12340 ): 9452 I/Os completed (+2408) 00:11:35.701 QEMU NVMe Ctrl (12341 ): 9202 I/Os completed (+2410) 00:11:35.701 00:11:36.638 QEMU NVMe Ctrl (12340 ): 11884 I/Os completed (+2432) 00:11:36.638 QEMU NVMe Ctrl (12341 ): 11634 I/Os completed (+2432) 00:11:36.638 00:11:37.578 QEMU NVMe Ctrl (12340 ): 14226 I/Os completed (+2342) 00:11:37.578 QEMU NVMe Ctrl (12341 ): 14022 I/Os completed (+2388) 00:11:37.578 00:11:38.957 QEMU NVMe Ctrl (12340 ): 16590 I/Os completed (+2364) 00:11:38.957 QEMU NVMe Ctrl (12341 ): 16386 I/Os completed (+2364) 
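Re-attach, traced at nvme/sw_hotplug.sh@56-@62 above, is the mirror image. The echo targets are again hidden by xtrace; a plausible reconstruction (every sysfs path below is an assumption) is a bus rescan followed by a driver_override dance that pins each controller back onto uio_pci_generic:

    echo 1 > /sys/bus/pci/rescan                      # @56: bring software-removed devices back
    for dev in "${nvmes[@]}"; do                      # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > /sys/bus/pci/drivers_probe      # @60/@61: the bdf is echoed twice in the
                                                      # trace; probe/bind targets are assumptions
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done

Clearing driver_override afterwards matters: leaving it set would force every later rebind of that bdf onto uio_pci_generic as well.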
00:11:38.957 00:11:39.896 QEMU NVMe Ctrl (12340 ): 18894 I/Os completed (+2304) 00:11:39.896 QEMU NVMe Ctrl (12341 ): 18692 I/Os completed (+2306) 00:11:39.896 00:11:40.851 QEMU NVMe Ctrl (12340 ): 21272 I/Os completed (+2378) 00:11:40.851 QEMU NVMe Ctrl (12341 ): 21081 I/Os completed (+2389) 00:11:40.851 00:11:41.788 QEMU NVMe Ctrl (12340 ): 23538 I/Os completed (+2266) 00:11:41.788 QEMU NVMe Ctrl (12341 ): 23374 I/Os completed (+2293) 00:11:41.788 00:11:42.724 QEMU NVMe Ctrl (12340 ): 25688 I/Os completed (+2150) 00:11:42.724 QEMU NVMe Ctrl (12341 ): 25568 I/Os completed (+2194) 00:11:42.724 00:11:43.661 QEMU NVMe Ctrl (12340 ): 27972 I/Os completed (+2284) 00:11:43.661 QEMU NVMe Ctrl (12341 ): 27907 I/Os completed (+2339) 00:11:43.661 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.921 [2024-10-01 15:11:42.286524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:43.921 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:43.921 [2024-10-01 15:11:42.288231] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.288411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.288488] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.288619] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:43.921 [2024-10-01 15:11:42.290762] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.290896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.290947] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.291057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:43.921 EAL: Scan for (pci) bus failed. 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.921 [2024-10-01 15:11:42.319903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:43.921 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:43.921 [2024-10-01 15:11:42.321577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.321727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.321769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.321787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:43.921 [2024-10-01 15:11:42.323468] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.323504] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.323527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 [2024-10-01 15:11:42.323544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:43.921 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:43.921 EAL: Scan for (pci) bus failed. 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.921 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:44.181 Attaching to 0000:00:10.0 00:11:44.181 Attached to 0000:00:10.0 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.181 15:11:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.181 Attaching to 0000:00:11.0 00:11:44.181 Attached to 0000:00:11.0 00:11:44.750 QEMU NVMe Ctrl (12340 ): 1248 I/Os completed (+1248) 00:11:44.750 QEMU NVMe Ctrl (12341 ): 972 I/Os completed (+972) 00:11:44.750 00:11:45.689 QEMU NVMe Ctrl (12340 ): 3608 I/Os completed (+2360) 00:11:45.689 QEMU NVMe Ctrl (12341 ): 3332 I/Os completed (+2360) 00:11:45.689 00:11:46.649 QEMU NVMe Ctrl (12340 ): 5900 I/Os completed (+2292) 00:11:46.649 QEMU NVMe Ctrl (12341 ): 5625 I/Os completed (+2293) 00:11:46.649 00:11:47.586 QEMU NVMe Ctrl (12340 ): 8231 I/Os completed (+2331) 00:11:47.586 QEMU NVMe Ctrl (12341 ): 7961 I/Os completed (+2336) 00:11:47.586 00:11:48.523 QEMU NVMe Ctrl (12340 ): 10463 I/Os completed (+2232) 00:11:48.523 QEMU NVMe Ctrl (12341 ): 10197 I/Os completed (+2236) 00:11:48.523 00:11:49.902 QEMU NVMe Ctrl (12340 ): 12765 I/Os completed (+2302) 00:11:49.902 QEMU NVMe Ctrl (12341 ): 12495 I/Os completed (+2298) 00:11:49.902 00:11:50.851 QEMU NVMe Ctrl (12340 ): 14953 I/Os completed (+2188) 00:11:50.851 QEMU NVMe Ctrl (12341 ): 14686 I/Os completed (+2191) 00:11:50.851 
00:11:51.790 QEMU NVMe Ctrl (12340 ): 17244 I/Os completed (+2291) 00:11:51.790 QEMU NVMe Ctrl (12341 ): 16969 I/Os completed (+2283) 00:11:51.790 00:11:52.726 QEMU NVMe Ctrl (12340 ): 19544 I/Os completed (+2300) 00:11:52.726 QEMU NVMe Ctrl (12341 ): 19269 I/Os completed (+2300) 00:11:52.726 00:11:53.664 QEMU NVMe Ctrl (12340 ): 21896 I/Os completed (+2352) 00:11:53.664 QEMU NVMe Ctrl (12341 ): 21621 I/Os completed (+2352) 00:11:53.664 00:11:54.601 QEMU NVMe Ctrl (12340 ): 24256 I/Os completed (+2360) 00:11:54.602 QEMU NVMe Ctrl (12341 ): 23981 I/Os completed (+2360) 00:11:54.602 00:11:55.555 QEMU NVMe Ctrl (12340 ): 26580 I/Os completed (+2324) 00:11:55.556 QEMU NVMe Ctrl (12341 ): 26305 I/Os completed (+2324) 00:11:55.556 00:11:56.124 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:56.124 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.124 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.124 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.125 [2024-10-01 15:11:54.661183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:56.125 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:56.125 [2024-10-01 15:11:54.662901] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.663063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.663202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.663260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:56.125 [2024-10-01 15:11:54.665453] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.665578] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.665628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.125 [2024-10-01 15:11:54.665698] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.384 [2024-10-01 15:11:54.702012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
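Each "Attaching ... Attached ... I/Os completed" round above is one pass of the helper's main loop. Pieced together from the traced line numbers (@36, @38-@40, @43, @66 in nvme/sw_hotplug.sh), the first phase runs roughly as in this sketch; the body is reconstructed, so anything beyond the traced statements is an assumption:

    hotplug_events=3 hotplug_wait=6 use_bdev=false    # the "3 6 false" arguments in the trace
    sleep "$hotplug_wait"                             # @36: let the hotplug app start its I/O
    while ((hotplug_events--)); do                    # @38: three remove/attach rounds
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @39/@40
        done
        # @43 evaluates false: use_bdev is off in this phase, so there is no bdev
        # list to poll; re-attach (@56-@62) and wait a fixed interval instead.
        sleep $((hotplug_wait * 2))                   # @66: the "sleep 12" seen in the trace
    done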
00:11:56.384 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:56.384 [2024-10-01 15:11:54.706066] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.706241] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.706301] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.706341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:56.384 [2024-10-01 15:11:54.707999] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.708069] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.708113] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 [2024-10-01 15:11:54.708217] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.384 15:11:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:56.384 Attaching to 0000:00:10.0 00:11:56.384 Attached to 0000:00:10.0 00:11:56.643 15:11:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.643 15:11:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.643 15:11:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.643 Attaching to 0000:00:11.0 00:11:56.643 Attached to 0000:00:11.0 00:11:56.643 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:56.643 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:56.643 [2024-10-01 15:11:55.041148] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:08.883 15:12:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:08.883 15:12:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.883 15:12:07 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.16 00:12:08.883 15:12:07 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.16 00:12:08.883 15:12:07 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:08.883 15:12:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.16 00:12:08.883 15:12:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.16 2 00:12:08.883 remove_attach_helper took 43.16s to complete (handling 2 nvme drive(s)) 15:12:07 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79989 00:12:15.455 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79989) - No such process 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79989 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80538 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80538 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80538 ']' 00:12:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.455 [2024-10-01 15:12:13.156726] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:12:15.455 [2024-10-01 15:12:13.157055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80538 ] 00:12:15.455 [2024-10-01 15:12:13.320473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.455 [2024-10-01 15:12:13.367342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:12:15.455 15:12:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.455 15:12:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.455 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:15.455 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:15.714 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:15.714 15:12:14 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:15.714 15:12:14 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:15.714 15:12:14 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:15.714 15:12:14 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:15.714 15:12:14 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:15.714 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:15.714 15:12:14 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:15.714 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:15.714 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:15.714 15:12:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.290 [2024-10-01 15:12:20.086650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:22.290 [2024-10-01 15:12:20.088977] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.089021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.089042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.089070] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.089084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.089113] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.089124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.089139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.089151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.089164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:12:20 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.290 [2024-10-01 15:12:20.485994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:22.290 [2024-10-01 15:12:20.488250] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.488310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.488338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.488370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.488392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.488408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.488421] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.488435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.488447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-01 15:12:20.488465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-01 15:12:20.488475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-01 15:12:20.488489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:12:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.290 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.549 15:12:20 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.549 15:12:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.781 [2024-10-01 15:12:33.065676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:34.781 [2024-10-01 15:12:33.068234] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.781 [2024-10-01 15:12:33.068283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.781 [2024-10-01 15:12:33.068312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.781 [2024-10-01 15:12:33.068341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.781 [2024-10-01 15:12:33.068365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.781 [2024-10-01 15:12:33.068381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.781 [2024-10-01 15:12:33.068397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.781 [2024-10-01 15:12:33.068408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.781 [2024-10-01 15:12:33.068422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.781 [2024-10-01 15:12:33.068434] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.781 [2024-10-01 15:12:33.068447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.781 [2024-10-01 15:12:33.068458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.781 15:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:34.781 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:35.349 [2024-10-01 15:12:33.664714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:35.349 [2024-10-01 15:12:33.666838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.349 [2024-10-01 15:12:33.666882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.349 [2024-10-01 15:12:33.666899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.349 [2024-10-01 15:12:33.666922] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.349 [2024-10-01 15:12:33.666934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.349 [2024-10-01 15:12:33.666949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.349 [2024-10-01 15:12:33.666962] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.349 [2024-10-01 15:12:33.666976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.349 [2024-10-01 15:12:33.666988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.349 [2024-10-01 15:12:33.667002] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.349 [2024-10-01 15:12:33.667013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.349 [2024-10-01 15:12:33.667027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:35.349 15:12:33 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:35.349 15:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.349 15:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.349 15:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.349 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:35.608 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:35.608 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.608 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.608 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.608 15:12:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:35.608 15:12:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:35.608 15:12:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.608 15:12:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:47.861 15:12:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.861 15:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.861 15:12:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:47.861 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:47.861 [2024-10-01 15:12:46.144597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:47.861 [2024-10-01 15:12:46.147000] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.861 [2024-10-01 15:12:46.147033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.861 [2024-10-01 15:12:46.147065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.861 [2024-10-01 15:12:46.147087] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.861 [2024-10-01 15:12:46.147104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.861 [2024-10-01 15:12:46.147116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.861 [2024-10-01 15:12:46.147131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.861 [2024-10-01 15:12:46.147153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.861 [2024-10-01 15:12:46.147180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.861 [2024-10-01 15:12:46.147196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:47.862 [2024-10-01 15:12:46.147212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.862 [2024-10-01 15:12:46.147224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.862 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:47.862 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:47.862 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:47.862 15:12:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.862 15:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.862 15:12:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.862 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:47.862 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:48.121 [2024-10-01 15:12:46.543951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
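This second phase drives the same hotplug loop against a running spdk_tgt with use_bdev=true, so instead of trusting sysfs the test asks the target which controllers it can still see. The bdev_bdfs helper traced at nvme/sw_hotplug.sh@12-@13 is exactly this pipeline:

    bdev_bdfs() {
        # jq reads the RPC output through process substitution, which is why
        # the xtrace above shows it consuming /dev/fd/63
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

sort -u collapses per-namespace duplicates, leaving one bdf per attached controller.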
00:12:48.121 [2024-10-01 15:12:46.546268] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:48.121 [2024-10-01 15:12:46.546310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.121 [2024-10-01 15:12:46.546328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.121 [2024-10-01 15:12:46.546351] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:48.121 [2024-10-01 15:12:46.546363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.121 [2024-10-01 15:12:46.546380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.121 [2024-10-01 15:12:46.546393] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:48.121 [2024-10-01 15:12:46.546406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.121 [2024-10-01 15:12:46.546419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.121 [2024-10-01 15:12:46.546433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:48.121 [2024-10-01 15:12:46.546444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:48.121 [2024-10-01 15:12:46.546458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:48.407 15:12:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.407 15:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:48.407 15:12:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:48.407 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:48.666 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:48.666 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:48.666 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:48.666 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:48.666 15:12:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:48.666 15:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:48.666 15:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:48.666 15:12:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.14 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.14 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:13:00.879 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:00.879 15:12:59 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:00.879 15:12:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:00.879 15:12:59 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.474 [2024-10-01 15:13:05.263708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:07.474 [2024-10-01 15:13:05.265825] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.265875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.265898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.265943] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.265961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.265974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.265989] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.266000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.266017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.266029] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.266043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.266054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:07.474 [2024-10-01 15:13:05.663076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
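The "Still waiting for ... to be gone" lines come from a poll on that helper: after removing the controllers, the test re-queries bdev_bdfs until the target reports none left. Reconstructed from the @50/@51 trace (the exact loop form is an assumption):

    bdfs=($(bdev_bdfs))                                           # @50
    while ((${#bdfs[@]} > 0)); do                                 # the traced (( 2 > 0 )) ... (( 0 > 0 ))
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51: one line per remaining bdf
        sleep 0.5                                                 # @50
        bdfs=($(bdev_bdfs))
    done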
00:13:07.474 [2024-10-01 15:13:05.665438] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.665483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.665516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.665539] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.665559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.665574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.665587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.665600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.665612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 [2024-10-01 15:13:05.665627] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.474 [2024-10-01 15:13:05.665639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.474 [2024-10-01 15:13:05.665655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.474 15:13:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.474 15:13:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.734 15:13:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.010 [2024-10-01 15:13:18.342668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:20.010 [2024-10-01 15:13:18.345021] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.010 [2024-10-01 15:13:18.345063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.010 [2024-10-01 15:13:18.345085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.010 [2024-10-01 15:13:18.345106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.010 [2024-10-01 15:13:18.345121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.010 [2024-10-01 15:13:18.345134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.010 [2024-10-01 15:13:18.345150] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.010 [2024-10-01 15:13:18.345162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.010 [2024-10-01 15:13:18.345189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.010 [2024-10-01 15:13:18.345203] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.010 [2024-10-01 15:13:18.345217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.010 [2024-10-01 15:13:18.345229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.010 15:13:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:20.010 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:20.269 [2024-10-01 15:13:18.742035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:20.269 [2024-10-01 15:13:18.744339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.269 [2024-10-01 15:13:18.744392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.269 [2024-10-01 15:13:18.744410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.269 [2024-10-01 15:13:18.744433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.269 [2024-10-01 15:13:18.744445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.269 [2024-10-01 15:13:18.744460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.269 [2024-10-01 15:13:18.744473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.269 [2024-10-01 15:13:18.744487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.269 [2024-10-01 15:13:18.744500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.269 [2024-10-01 15:13:18.744515] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.269 [2024-10-01 15:13:18.744526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.269 [2024-10-01 15:13:18.744541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.528 15:13:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.528 15:13:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 15:13:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:20.528 15:13:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:20.528 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.528 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.528 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.787 15:13:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.990 15:13:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.990 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.990 15:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.990 15:13:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.991 [2024-10-01 15:13:31.321770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:32.991 [2024-10-01 15:13:31.323596] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.991 [2024-10-01 15:13:31.323639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.991 [2024-10-01 15:13:31.323663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.991 [2024-10-01 15:13:31.323685] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.991 [2024-10-01 15:13:31.323703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.991 [2024-10-01 15:13:31.323715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.991 [2024-10-01 15:13:31.323729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.991 [2024-10-01 15:13:31.323740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.991 [2024-10-01 15:13:31.323754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.991 [2024-10-01 15:13:31.323766] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.991 [2024-10-01 15:13:31.323780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.991 [2024-10-01 15:13:31.323793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.991 15:13:31 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.991 15:13:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.991 15:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.991 15:13:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:32.991 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:33.558 [2024-10-01 15:13:31.820968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:33.558 [2024-10-01 15:13:31.823132] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.558 [2024-10-01 15:13:31.823196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.558 [2024-10-01 15:13:31.823215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.558 [2024-10-01 15:13:31.823238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.558 [2024-10-01 15:13:31.823250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.558 [2024-10-01 15:13:31.823265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.558 [2024-10-01 15:13:31.823277] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.558 [2024-10-01 15:13:31.823294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.558 [2024-10-01 15:13:31.823306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.558 [2024-10-01 15:13:31.823320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.558 [2024-10-01 15:13:31.823331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.558 [2024-10-01 15:13:31.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:33.558 15:13:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.558 15:13:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.558 15:13:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:33.558 15:13:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:33.558 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.558 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.558 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.817 15:13:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.16 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.16 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:13:46.026 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:46.026 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80538 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80538 ']' 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80538 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80538 00:13:46.026 killing process with pid 80538 00:13:46.026 15:13:44 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:46.027 15:13:44 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.027 15:13:44 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80538' 00:13:46.027 15:13:44 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80538 00:13:46.027 15:13:44 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80538 00:13:46.593 15:13:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:47.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.725 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:47.725 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:47.725 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.725 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.725 00:13:47.725 real 2m31.428s 00:13:47.725 user 1m47.965s 00:13:47.725 sys 0m23.692s 00:13:47.725 15:13:46 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.725 ************************************ 00:13:47.725 END TEST sw_hotplug 00:13:47.725 ************************************ 00:13:47.725 15:13:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 15:13:46 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:47.725 15:13:46 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.725 15:13:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.725 15:13:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.725 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.725 ************************************ 00:13:47.725 START TEST nvme_xnvme 00:13:47.725 ************************************ 00:13:47.725 15:13:46 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.983 * Looking for test storage... 
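The teardown traced above runs killprocess from common/autotest_common.sh: validate the pid argument, confirm the process is still alive, resolve its command name (the trace compares it against "sudo" before proceeding), then signal and reap it. A sketch assembled from the traced lines only; the branches the trace did not take are assumptions:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1            # @950: '[' -z $pid ']'
      kill -0 "$pid" || return 0           # @954: assumed no-op if already gone
      if [[ $(uname) == Linux ]]; then     # @955
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")   # @956
          [[ $process_name == sudo ]] && return 1           # @960: assumed guard
      fi
      echo "killing process with pid $pid" # @968
      kill "$pid"                          # @969
      wait "$pid"                          # @974
  }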
00:13:47.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:47.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.983 --rc genhtml_branch_coverage=1 00:13:47.983 --rc genhtml_function_coverage=1 00:13:47.983 --rc genhtml_legend=1 00:13:47.983 --rc geninfo_all_blocks=1 00:13:47.983 --rc geninfo_unexecuted_blocks=1 00:13:47.983 00:13:47.983 ' 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:47.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.983 --rc genhtml_branch_coverage=1 00:13:47.983 --rc genhtml_function_coverage=1 00:13:47.983 --rc genhtml_legend=1 00:13:47.983 --rc geninfo_all_blocks=1 00:13:47.983 --rc geninfo_unexecuted_blocks=1 00:13:47.983 00:13:47.983 ' 00:13:47.983 15:13:46 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:47.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.983 --rc genhtml_branch_coverage=1 00:13:47.983 --rc genhtml_function_coverage=1 00:13:47.983 --rc genhtml_legend=1 00:13:47.983 --rc geninfo_all_blocks=1 00:13:47.983 --rc geninfo_unexecuted_blocks=1 00:13:47.983 00:13:47.983 ' 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:47.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.983 --rc genhtml_branch_coverage=1 00:13:47.983 --rc genhtml_function_coverage=1 00:13:47.983 --rc genhtml_legend=1 00:13:47.983 --rc geninfo_all_blocks=1 00:13:47.983 --rc geninfo_unexecuted_blocks=1 00:13:47.983 00:13:47.983 ' 00:13:47.983 15:13:46 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.983 15:13:46 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.983 15:13:46 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.983 15:13:46 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.983 15:13:46 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.983 15:13:46 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:47.983 15:13:46 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.983 15:13:46 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.983 15:13:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.983 
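The lcov probe traced above gates the coverage flags on the installed lcov version: scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically until one side wins. A sketch of cmp_versions under that reading (names as traced; only the '<' and '>' operators shown, missing fields assumed to compare as zero):

  cmp_versions() {                  # traced here as: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 ver1_l ver2_l lt=0 gt=0 v
      IFS=.-: read -ra ver1 <<< "$1"   # scripts/common.sh@336
      IFS=.-: read -ra ver2 <<< "$3"   # scripts/common.sh@337
      ver1_l=${#ver1[@]}
      ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          if ((ver1[v] > ver2[v])); then gt=1; break; fi   # @367
          if ((ver1[v] < ver2[v])); then lt=1; break; fi   # @368
      done
      case $op in                      # @344
          '<') ((lt == 1)) ;;
          '>') ((gt == 1)) ;;
      esac
  }

Since 1.15 sorts below 2, the suite selects the --rc lcov_branch_coverage=1 style of options exported just above.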
************************************ 00:13:47.983 START TEST xnvme_to_malloc_dd_copy 00:13:47.983 ************************************ 00:13:47.983 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:47.983 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:47.983 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:47.983 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:48.241 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:48.242 15:13:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:48.242 { 00:13:48.242 "subsystems": [ 00:13:48.242 { 00:13:48.242 "subsystem": "bdev", 00:13:48.242 "config": [ 00:13:48.242 { 00:13:48.242 "params": { 00:13:48.242 "block_size": 512, 00:13:48.242 "num_blocks": 2097152, 00:13:48.242 "name": "malloc0" 00:13:48.242 }, 00:13:48.242 "method": "bdev_malloc_create" 00:13:48.242 }, 00:13:48.242 { 00:13:48.242 "params": { 00:13:48.242 "io_mechanism": "libaio", 00:13:48.242 "filename": "/dev/nullb0", 00:13:48.242 "name": "null0" 00:13:48.242 }, 00:13:48.242 "method": "bdev_xnvme_create" 00:13:48.242 }, 
00:13:48.242 { 00:13:48.242 "method": "bdev_wait_for_examine" 00:13:48.242 } 00:13:48.242 ] 00:13:48.242 } 00:13:48.242 ] 00:13:48.242 } 00:13:48.242 [2024-10-01 15:13:46.645021] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:13:48.242 [2024-10-01 15:13:46.645155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81890 ] 00:13:48.499 [2024-10-01 15:13:46.812358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.499 [2024-10-01 15:13:46.862624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.500  Copying: 246/1024 [MB] (246 MBps) Copying: 494/1024 [MB] (247 MBps) Copying: 739/1024 [MB] (245 MBps) Copying: 986/1024 [MB] (246 MBps) Copying: 1024/1024 [MB] (average 247 MBps) 00:13:53.500 00:13:53.500 15:13:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:53.500 15:13:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:53.500 15:13:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:53.500 15:13:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:53.500 { 00:13:53.500 "subsystems": [ 00:13:53.500 { 00:13:53.500 "subsystem": "bdev", 00:13:53.500 "config": [ 00:13:53.500 { 00:13:53.500 "params": { 00:13:53.500 "block_size": 512, 00:13:53.500 "num_blocks": 2097152, 00:13:53.500 "name": "malloc0" 00:13:53.500 }, 00:13:53.500 "method": "bdev_malloc_create" 00:13:53.500 }, 00:13:53.500 { 00:13:53.500 "params": { 00:13:53.500 "io_mechanism": "libaio", 00:13:53.500 "filename": "/dev/nullb0", 00:13:53.500 "name": "null0" 00:13:53.500 }, 00:13:53.500 "method": "bdev_xnvme_create" 00:13:53.500 }, 00:13:53.500 { 00:13:53.500 "method": "bdev_wait_for_examine" 00:13:53.500 } 00:13:53.500 ] 00:13:53.500 } 00:13:53.500 ] 00:13:53.500 } 00:13:53.500 [2024-10-01 15:13:52.030353] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
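The JSON printed above is the complete bdev configuration for the first pass: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xnvme bdev over /dev/nullb0, opened through libaio, as the destination; gen_conf hands it to spdk_dd on /dev/fd/62. The same copy as a standalone command, sketched with a here-doc in place of gen_conf (paths per this CI checkout):

  modprobe null_blk gb=1    # init_null_blk: provides /dev/nullb0 for the xnvme bdev
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(
      cat <<'CONF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_malloc_create",
     "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
    {"method": "bdev_xnvme_create",
     "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
    {"method": "bdev_wait_for_examine"}
  ]}]}
  CONF
  )

The mirror pass starting just below (--ib=null0 --ob=malloc0) exercises the read side of the same bdev pair.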
00:13:53.500 [2024-10-01 15:13:52.030574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81955 ] 00:13:53.756 [2024-10-01 15:13:52.203802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.756 [2024-10-01 15:13:52.254647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.627  Copying: 254/1024 [MB] (254 MBps) Copying: 509/1024 [MB] (255 MBps) Copying: 764/1024 [MB] (254 MBps) Copying: 1019/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:13:58.627 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:58.627 15:13:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:58.886 { 00:13:58.886 "subsystems": [ 00:13:58.886 { 00:13:58.886 "subsystem": "bdev", 00:13:58.886 "config": [ 00:13:58.886 { 00:13:58.886 "params": { 00:13:58.886 "block_size": 512, 00:13:58.886 "num_blocks": 2097152, 00:13:58.886 "name": "malloc0" 00:13:58.886 }, 00:13:58.886 "method": "bdev_malloc_create" 00:13:58.886 }, 00:13:58.886 { 00:13:58.886 "params": { 00:13:58.886 "io_mechanism": "io_uring", 00:13:58.886 "filename": "/dev/nullb0", 00:13:58.886 "name": "null0" 00:13:58.886 }, 00:13:58.886 "method": "bdev_xnvme_create" 00:13:58.886 }, 00:13:58.886 { 00:13:58.886 "method": "bdev_wait_for_examine" 00:13:58.886 } 00:13:58.886 ] 00:13:58.886 } 00:13:58.886 ] 00:13:58.886 } 00:13:58.886 [2024-10-01 15:13:57.262077] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
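The xnvme.sh@38-39 trace above shows how the suite switches I/O engines between passes: the test is driven from associative arrays, and the loop rewrites a single key before regenerating the JSON. Sketched from the traced assignments (gen_conf itself is not shown; it renders these arrays into the config printed above, and the trace delivers its output as /dev/fd/62):

  xnvme_io=()
  xnvme_io+=(libaio)                  # xnvme.sh@20
  xnvme_io+=(io_uring)                # xnvme.sh@21
  declare -A method_bdev_malloc_create_0=(
      [name]=malloc0 [num_blocks]=2097152 [block_size]=512   # xnvme.sh@28
  )
  declare -A method_bdev_xnvme_create_0=(
      [name]=null0 [filename]=/dev/nullb0                    # xnvme.sh@35-36
  )
  for io in "${xnvme_io[@]}"; do                             # xnvme.sh@38
      method_bdev_xnvme_create_0[io_mechanism]=$io           # xnvme.sh@39
      spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)     # full path as above
  done

The switch shows up directly in the copy rates on this run: roughly 247 to 255 MB/s per direction with libaio against 264 to 268 MB/s with io_uring.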
00:13:58.886 [2024-10-01 15:13:57.262242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82020 ] 00:13:58.886 [2024-10-01 15:13:57.429797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.144 [2024-10-01 15:13:57.475133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.898  Copying: 263/1024 [MB] (263 MBps) Copying: 526/1024 [MB] (263 MBps) Copying: 789/1024 [MB] (262 MBps) Copying: 1024/1024 [MB] (average 264 MBps) 00:14:03.898 00:14:03.898 15:14:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:03.898 15:14:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:03.898 15:14:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:03.898 15:14:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 { 00:14:03.898 "subsystems": [ 00:14:03.898 { 00:14:03.898 "subsystem": "bdev", 00:14:03.898 "config": [ 00:14:03.898 { 00:14:03.898 "params": { 00:14:03.898 "block_size": 512, 00:14:03.898 "num_blocks": 2097152, 00:14:03.898 "name": "malloc0" 00:14:03.898 }, 00:14:03.898 "method": "bdev_malloc_create" 00:14:03.898 }, 00:14:03.898 { 00:14:03.898 "params": { 00:14:03.898 "io_mechanism": "io_uring", 00:14:03.898 "filename": "/dev/nullb0", 00:14:03.898 "name": "null0" 00:14:03.898 }, 00:14:03.898 "method": "bdev_xnvme_create" 00:14:03.898 }, 00:14:03.898 { 00:14:03.898 "method": "bdev_wait_for_examine" 00:14:03.898 } 00:14:03.898 ] 00:14:03.898 } 00:14:03.898 ] 00:14:03.898 } 00:14:03.898 [2024-10-01 15:14:02.297257] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:14:03.898 [2024-10-01 15:14:02.297387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82088 ] 00:14:04.157 [2024-10-01 15:14:02.465676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.157 [2024-10-01 15:14:02.512962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.666  Copying: 267/1024 [MB] (267 MBps) Copying: 536/1024 [MB] (268 MBps) Copying: 803/1024 [MB] (267 MBps) Copying: 1024/1024 [MB] (average 268 MBps) 00:14:08.666 00:14:08.666 15:14:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:08.666 15:14:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:08.924 00:14:08.924 real 0m20.697s 00:14:08.924 user 0m16.077s 00:14:08.924 sys 0m4.184s 00:14:08.924 15:14:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.924 15:14:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:08.925 ************************************ 00:14:08.925 END TEST xnvme_to_malloc_dd_copy 00:14:08.925 ************************************ 00:14:08.925 15:14:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:08.925 15:14:07 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:08.925 15:14:07 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.925 15:14:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.925 ************************************ 00:14:08.925 START TEST xnvme_bdevperf 00:14:08.925 ************************************ 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:08.925 15:14:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.925 { 00:14:08.925 "subsystems": [ 00:14:08.925 { 00:14:08.925 "subsystem": "bdev", 00:14:08.925 "config": [ 00:14:08.925 { 00:14:08.925 "params": { 00:14:08.925 "io_mechanism": "libaio", 00:14:08.925 "filename": "/dev/nullb0", 00:14:08.925 "name": "null0" 00:14:08.925 }, 00:14:08.925 "method": "bdev_xnvme_create" 00:14:08.925 }, 00:14:08.925 { 00:14:08.925 "method": "bdev_wait_for_examine" 00:14:08.925 } 00:14:08.925 ] 00:14:08.925 } 00:14:08.925 ] 00:14:08.925 } 00:14:08.925 [2024-10-01 15:14:07.421260] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:14:08.925 [2024-10-01 15:14:07.421406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82171 ] 00:14:09.183 [2024-10-01 15:14:07.589617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.183 [2024-10-01 15:14:07.639544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.442 Running I/O for 5 seconds... 00:14:14.592 151808.00 IOPS, 593.00 MiB/s 151872.00 IOPS, 593.25 MiB/s 151082.67 IOPS, 590.17 MiB/s 151360.00 IOPS, 591.25 MiB/s 00:14:14.592 Latency(us) 00:14:14.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.592 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:14.592 null0 : 5.00 151649.28 592.38 0.00 0.00 419.62 134.07 1895.02 00:14:14.592 =================================================================================================================== 00:14:14.592 Total : 151649.28 592.38 0.00 0.00 419.62 134.07 1895.02 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.592 15:14:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.592 { 00:14:14.592 "subsystems": [ 00:14:14.592 { 00:14:14.592 "subsystem": "bdev", 00:14:14.592 "config": [ 00:14:14.592 { 00:14:14.592 "params": { 00:14:14.592 "io_mechanism": "io_uring", 00:14:14.592 "filename": "/dev/nullb0", 00:14:14.592 "name": "null0" 00:14:14.592 }, 00:14:14.592 "method": "bdev_xnvme_create" 00:14:14.592 }, 00:14:14.592 { 00:14:14.592 "method": "bdev_wait_for_examine" 00:14:14.592 } 00:14:14.592 ] 00:14:14.592 } 00:14:14.592 ] 00:14:14.592 } 00:14:14.592 [2024-10-01 15:14:13.066829] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
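Both bdevperf passes use the same workload against the null0 bdev: 64 outstanding 4 KiB random reads for 5 seconds (-q 64 -o 4096 -w randread -t 5 -T null0), once per I/O mechanism. The io_uring pass whose config was just printed and whose results follow below can be reproduced standalone (a sketch; JSON as logged):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 64 -o 4096 -w randread -t 5 -T null0 --json <(
      cat <<'CONF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create",
     "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "io_uring"}},
    {"method": "bdev_wait_for_examine"}
  ]}]}
  CONF
  )

Against null_blk the mechanism is worth about 27% here: roughly 151.6k IOPS at 419.6 us average latency with libaio versus 192.9k IOPS at 329.4 us with io_uring, per the two result tables.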
00:14:14.592 [2024-10-01 15:14:13.067370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82234 ] 00:14:14.850 [2024-10-01 15:14:13.237456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.850 [2024-10-01 15:14:13.287900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.850 Running I/O for 5 seconds... 00:14:19.966 199808.00 IOPS, 780.50 MiB/s 196800.00 IOPS, 768.75 MiB/s 193834.67 IOPS, 757.17 MiB/s 193008.00 IOPS, 753.94 MiB/s 192934.40 IOPS, 753.65 MiB/s 00:14:19.966 Latency(us) 00:14:19.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.966 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:19.966 null0 : 5.00 192856.40 753.35 0.00 0.00 329.41 289.52 1881.86 00:14:19.966 =================================================================================================================== 00:14:19.966 Total : 192856.40 753.35 0.00 0.00 329.41 289.52 1881.86 00:14:20.239 15:14:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:20.239 15:14:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:20.239 00:14:20.239 real 0m11.362s 00:14:20.239 user 0m7.852s 00:14:20.239 sys 0m3.297s 00:14:20.239 15:14:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.239 15:14:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.239 ************************************ 00:14:20.239 END TEST xnvme_bdevperf 00:14:20.239 ************************************ 00:14:20.239 ************************************ 00:14:20.239 END TEST nvme_xnvme 00:14:20.239 ************************************ 00:14:20.239 00:14:20.239 real 0m32.444s 00:14:20.239 user 0m24.113s 00:14:20.239 sys 0m7.692s 00:14:20.239 15:14:18 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.239 15:14:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.239 15:14:18 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:20.239 15:14:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.239 15:14:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.239 15:14:18 -- common/autotest_common.sh@10 -- # set +x 00:14:20.239 ************************************ 00:14:20.239 START TEST blockdev_xnvme 00:14:20.239 ************************************ 00:14:20.239 15:14:18 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:20.498 * Looking for test storage... 
00:14:20.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:20.498 15:14:18 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:20.498 15:14:18 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:14:20.498 15:14:18 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:20.498 15:14:18 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.498 15:14:18 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:20.498 15:14:19 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.498 15:14:19 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.498 15:14:19 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.498 15:14:19 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:20.498 15:14:19 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.498 15:14:19 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.498 --rc genhtml_branch_coverage=1 00:14:20.498 --rc genhtml_function_coverage=1 00:14:20.498 --rc genhtml_legend=1 00:14:20.498 --rc geninfo_all_blocks=1 00:14:20.498 --rc geninfo_unexecuted_blocks=1 00:14:20.498 00:14:20.498 ' 00:14:20.498 15:14:19 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.498 --rc genhtml_branch_coverage=1 00:14:20.498 --rc genhtml_function_coverage=1 00:14:20.498 --rc genhtml_legend=1 
00:14:20.498 --rc geninfo_all_blocks=1 00:14:20.498 --rc geninfo_unexecuted_blocks=1 00:14:20.498 00:14:20.498 ' 00:14:20.498 15:14:19 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.498 --rc genhtml_branch_coverage=1 00:14:20.498 --rc genhtml_function_coverage=1 00:14:20.498 --rc genhtml_legend=1 00:14:20.498 --rc geninfo_all_blocks=1 00:14:20.498 --rc geninfo_unexecuted_blocks=1 00:14:20.499 00:14:20.499 ' 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:20.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.499 --rc genhtml_branch_coverage=1 00:14:20.499 --rc genhtml_function_coverage=1 00:14:20.499 --rc genhtml_legend=1 00:14:20.499 --rc geninfo_all_blocks=1 00:14:20.499 --rc geninfo_unexecuted_blocks=1 00:14:20.499 00:14:20.499 ' 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82376 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:20.499 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82376 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 82376 ']' 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.499 15:14:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.761 [2024-10-01 15:14:19.145930] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:14:20.761 [2024-10-01 15:14:19.146303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82376 ] 00:14:21.022 [2024-10-01 15:14:19.320070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.022 [2024-10-01 15:14:19.371433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.588 15:14:19 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.588 15:14:19 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:21.588 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:21.588 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:21.588 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:21.588 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:21.588 15:14:19 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:22.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.414 Waiting for block devices as requested 00:14:22.414 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.929 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:28.351 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:28.351 
15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:28.351 nvme0n1 00:14:28.351 nvme1n1 00:14:28.351 nvme2n1 00:14:28.351 nvme2n2 00:14:28.351 nvme2n3 00:14:28.351 nvme3n1 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 
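The loop above accumulates one bdev_xnvme_create line per block node and feeds the whole batch to rpc_cmd; issued one at a time through rpc.py it would look roughly like this sketch (io_uring is the io_mechanism the trace selects):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                       # skip anything that is not a block node
    # arguments: device path, bdev name (the basename), io mechanism
    "$rpc" bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
done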
-- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 15:14:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:28.351 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:28.352 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c7f1531a-bd29-4c12-a833-f62d3475f943"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c7f1531a-bd29-4c12-a833-f62d3475f943",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8c7af91-e110-4e11-9c9a-788e74cd6e1b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8c7af91-e110-4e11-9c9a-788e74cd6e1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "999e26ac-22ac-4f68-b06f-8338c8cb189d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "999e26ac-22ac-4f68-b06f-8338c8cb189d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "920aa845-d701-4447-96eb-2f552301b1e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "920aa845-d701-4447-96eb-2f552301b1e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "274efb42-19f2-45d7-bf7d-09364e98aa3d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "274efb42-19f2-45d7-bf7d-09364e98aa3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b87718e5-9a75-4476-bcfc-c899c7cdda68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b87718e5-9a75-4476-bcfc-c899c7cdda68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:28.352 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:28.352 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:28.352 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:28.352 15:14:26 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 82376 00:14:28.352 15:14:26 blockdev_xnvme -- 
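The mapfile/jq steps above are how the test derives its working bdev list from that JSON dump; as a standalone one-liner the same query is, approximately:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'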
common/autotest_common.sh@950 -- # '[' -z 82376 ']' 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 82376 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82376 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82376' 00:14:28.352 killing process with pid 82376 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 82376 00:14:28.352 15:14:26 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 82376 00:14:28.610 15:14:27 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:28.610 15:14:27 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:28.610 15:14:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:28.610 15:14:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.610 15:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.610 ************************************ 00:14:28.610 START TEST bdev_hello_world 00:14:28.610 ************************************ 00:14:28.610 15:14:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:28.867 [2024-10-01 15:14:27.192986] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:14:28.867 [2024-10-01 15:14:27.193360] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82731 ] 00:14:28.867 [2024-10-01 15:14:27.363960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.126 [2024-10-01 15:14:27.417077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.126 [2024-10-01 15:14:27.599810] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:29.126 [2024-10-01 15:14:27.599866] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:29.126 [2024-10-01 15:14:27.599896] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:29.126 [2024-10-01 15:14:27.602370] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:29.126 [2024-10-01 15:14:27.602884] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:29.126 [2024-10-01 15:14:27.602916] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:29.126 [2024-10-01 15:14:27.603142] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
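The whole hello-world pass above reduces to one invocation of the prebuilt example against the generated bdev config; nvme0n1 is simply the first name from the bdev list:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1    # writes "Hello World!" to the bdev, then reads it back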
00:14:29.126 00:14:29.126 [2024-10-01 15:14:27.603181] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:29.384 00:14:29.384 real 0m0.750s 00:14:29.384 user 0m0.408s 00:14:29.384 sys 0m0.231s 00:14:29.384 15:14:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:29.384 ************************************ 00:14:29.384 END TEST bdev_hello_world 00:14:29.384 ************************************ 00:14:29.384 15:14:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:29.384 15:14:27 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:29.384 15:14:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:29.384 15:14:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:29.384 15:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.384 ************************************ 00:14:29.384 START TEST bdev_bounds 00:14:29.384 ************************************ 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:29.384 Process bdevio pid: 82762 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82762 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82762' 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82762 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82762 ']' 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.384 15:14:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:29.645 [2024-10-01 15:14:28.015521] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
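The bounds test that starts here runs in two halves, both visible in the trace: launch bdevio as an RPC server, then drive the per-bdev suites over the default socket. Condensed:

bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# block until /var/tmp/spdk.sock accepts RPCs (the waitforlisten step above)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests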
00:14:29.645 [2024-10-01 15:14:28.016326] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82762 ] 00:14:29.645 [2024-10-01 15:14:28.188960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.903 [2024-10-01 15:14:28.245155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.903 [2024-10-01 15:14:28.245267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.903 [2024-10-01 15:14:28.245280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.467 15:14:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.467 15:14:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:30.467 15:14:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:30.724 I/O targets: 00:14:30.724 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:30.724 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:30.724 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:30.724 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:30.724 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:30.724 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:30.724 00:14:30.724 00:14:30.724 CUnit - A unit testing framework for C - Version 2.1-3 00:14:30.724 http://cunit.sourceforge.net/ 00:14:30.724 00:14:30.724 00:14:30.724 Suite: bdevio tests on: nvme3n1 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks ...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 00:14:30.724 Suite: bdevio tests on: nvme2n3 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 
00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks ...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 00:14:30.724 Suite: bdevio tests on: nvme2n2 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks ...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 00:14:30.724 Suite: bdevio tests on: nvme2n1 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks 
...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 00:14:30.724 Suite: bdevio tests on: nvme1n1 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks ...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 00:14:30.724 Suite: bdevio tests on: nvme0n1 00:14:30.724 Test: blockdev write read block ...passed 00:14:30.724 Test: blockdev write zeroes read block ...passed 00:14:30.724 Test: blockdev write zeroes read no split ...passed 00:14:30.724 Test: blockdev write zeroes read split ...passed 00:14:30.724 Test: blockdev write zeroes read split partial ...passed 00:14:30.724 Test: blockdev reset ...passed 00:14:30.724 Test: blockdev write read 8 blocks ...passed 00:14:30.724 Test: blockdev write read size > 128k ...passed 00:14:30.724 Test: blockdev write read invalid size ...passed 00:14:30.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:30.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:30.724 Test: blockdev write read max offset ...passed 00:14:30.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:30.724 Test: blockdev writev readv 8 blocks ...passed 00:14:30.724 Test: blockdev writev readv 30 x 1block ...passed 00:14:30.724 Test: blockdev writev readv block ...passed 00:14:30.724 Test: blockdev writev readv size > 128k ...passed 00:14:30.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:30.724 Test: blockdev comparev and writev ...passed 00:14:30.724 Test: blockdev nvme passthru rw ...passed 00:14:30.724 Test: blockdev nvme passthru vendor specific ...passed 00:14:30.724 Test: blockdev nvme admin passthru ...passed 00:14:30.724 Test: blockdev copy ...passed 
00:14:30.724 
00:14:30.724 Run Summary: Type Total Ran Passed Failed Inactive
00:14:30.724 suites 6 6 n/a 0 0
00:14:30.724 tests 138 138 138 0 0
00:14:30.724 asserts 780 780 780 0 n/a
00:14:30.724 
00:14:30.724 Elapsed time = 0.377 seconds
00:14:30.724 0
00:14:30.724 15:14:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82762
00:14:30.724 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82762 ']'
00:14:30.724 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82762
00:14:30.724 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82762
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82762'
00:14:30.981 killing process with pid 82762
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82762
00:14:30.981 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82762
00:14:31.238 15:14:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:14:31.238 
00:14:31.238 real 0m1.624s
00:14:31.238 user 0m3.992s
00:14:31.238 sys 0m0.408s
00:14:31.238 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:31.238 15:14:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:14:31.238 ************************************
00:14:31.238 END TEST bdev_bounds
00:14:31.238 ************************************
00:14:31.239 15:14:29 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:31.239 15:14:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:31.239 15:14:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:31.239 15:14:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:31.239 ************************************
00:14:31.239 START TEST bdev_nbd
00:14:31.239 ************************************
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
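killprocess, which just tore down bdevio above, is the shared teardown helper; a rough reconstruction from the calls the trace exposes (argument check, liveness probe, name lookup, kill, reap):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1          # no pid captured
    kill -0 "$pid" || return 0         # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")    # reactor_0 here
    # the sudo comparison in the trace selects a privileged kill path; not needed here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}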
00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82807 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82807 /var/tmp/spdk-nbd.sock 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82807 ']' 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.239 15:14:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:31.239 [2024-10-01 15:14:29.724974] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
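The NBD test needs its own SPDK app on a private socket so the console RPC channel stays free; the launch above amounts to this sketch (paths and socket taken from the trace):

bdev_svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
"$bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
# then block until the UNIX socket accepts RPCs, as waitforlisten does above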
00:14:31.239 [2024-10-01 15:14:29.725134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.497 [2024-10-01 15:14:29.885310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.497 [2024-10-01 15:14:29.937184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:32.062 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:32.320 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:32.320 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:32.320 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:32.320 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.321 
1+0 records in 00:14:32.321 1+0 records out 00:14:32.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507718 s, 8.1 MB/s 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:32.321 15:14:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:32.579 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:32.579 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.837 1+0 records in 00:14:32.837 1+0 records out 00:14:32.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997131 s, 4.1 MB/s 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:32.837 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:33.096 15:14:31 
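Every nbd_start_disk above is followed by the same readiness probe: wait for the node to appear in /proc/partitions, then prove it answers I/O with one direct read. A sketch of that probe; the sleep between retries is an assumption, the trace only shows the retry counter:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                    # assumed back-off
    done
    local out=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    dd if=/dev/$nbd_name of="$out" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$out")
    rm -f "$out"
    [ "$size" != 0 ]                                 # a zero-byte read means the device is dead
}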
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.096 1+0 records in 00:14:33.096 1+0 records out 00:14:33.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635031 s, 6.5 MB/s 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:33.096 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.355 1+0 records in 00:14:33.355 1+0 records out 00:14:33.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636149 s, 6.4 MB/s 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:33.355 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.614 1+0 records in 00:14:33.614 1+0 records out 00:14:33.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553258 s, 7.4 MB/s 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:33.614 15:14:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:33.872 15:14:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.872 1+0 records in 00:14:33.872 1+0 records out 00:14:33.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686095 s, 6.0 MB/s 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:33.872 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:34.129 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd0", 00:14:34.129 "bdev_name": "nvme0n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd1", 00:14:34.129 "bdev_name": "nvme1n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd2", 00:14:34.129 "bdev_name": "nvme2n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd3", 00:14:34.129 "bdev_name": "nvme2n2" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd4", 00:14:34.129 "bdev_name": "nvme2n3" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd5", 00:14:34.129 "bdev_name": "nvme3n1" 00:14:34.129 } 00:14:34.129 ]' 00:14:34.129 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:34.129 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:34.129 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd0", 00:14:34.129 "bdev_name": "nvme0n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd1", 00:14:34.129 "bdev_name": "nvme1n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd2", 00:14:34.129 "bdev_name": "nvme2n1" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd3", 00:14:34.129 "bdev_name": "nvme2n2" 00:14:34.129 }, 00:14:34.129 { 00:14:34.129 "nbd_device": "/dev/nbd4", 00:14:34.130 "bdev_name": "nvme2n3" 00:14:34.130 }, 00:14:34.130 { 00:14:34.130 "nbd_device": 
"/dev/nbd5", 00:14:34.130 "bdev_name": "nvme3n1" 00:14:34.130 } 00:14:34.130 ]' 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.130 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.388 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.646 15:14:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.904 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.163 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.422 15:14:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:35.681 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:35.940 /dev/nbd0 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.940 1+0 records in 00:14:35.940 1+0 records out 00:14:35.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057684 s, 7.1 MB/s 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:35.940 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:36.199 /dev/nbd1 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.199 1+0 records in 00:14:36.199 1+0 records out 00:14:36.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494684 s, 8.3 MB/s 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.199 15:14:34 
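[editor's note] The probe just traced for /dev/nbd0 is the core readiness check behind waitfornbd: poll /proc/partitions until the device name appears, then read a single block with O_DIRECT and confirm exactly 4096 bytes landed. A minimal standalone sketch of that pattern — wait_for_nbd is a hypothetical helper name, the 20-attempt budget and 4096-byte check mirror the (( i <= 20 )) loop and size=4096 test in the trace, and the 0.1 s pause between polls is an assumption:

#!/usr/bin/env bash
# Sketch of the NBD readiness probe traced above (assumptions noted).
wait_for_nbd() {
    local nbd_name=$1
    local i tmp size

    for (( i = 1; i <= 20; i++ )); do
        # The device exists once its name appears in /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed poll interval; the trace elides the delay
    done
    (( i <= 20 )) || return 1

    # Prove it actually serves I/O: read one block, bypassing the page
    # cache, and confirm exactly 4096 bytes landed in the scratch file.
    tmp=$(mktemp)
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    (( size == 4096 ))
}

wait_for_nbd nbd0 || echo "nbd0 never became ready" >&2

The same loop bounds reappear for every device started in this run (nbd1, nbd10, ... nbd13 below).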
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.199 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:36.458 /dev/nbd10 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.458 1+0 records in 00:14:36.458 1+0 records out 00:14:36.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794616 s, 5.2 MB/s 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.458 15:14:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:36.717 /dev/nbd11 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.717 15:14:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.717 1+0 records in 00:14:36.717 1+0 records out 00:14:36.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667553 s, 6.1 MB/s 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.717 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:36.975 /dev/nbd12 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.975 1+0 records in 00:14:36.975 1+0 records out 00:14:36.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504372 s, 8.1 MB/s 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:36.975 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:37.233 /dev/nbd13 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.233 1+0 records in 00:14:37.233 1+0 records out 00:14:37.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851888 s, 4.8 MB/s 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.233 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:37.492 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd0", 00:14:37.492 "bdev_name": "nvme0n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd1", 00:14:37.492 "bdev_name": "nvme1n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd10", 00:14:37.492 "bdev_name": "nvme2n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd11", 00:14:37.492 "bdev_name": "nvme2n2" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd12", 00:14:37.492 "bdev_name": "nvme2n3" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd13", 00:14:37.492 "bdev_name": "nvme3n1" 00:14:37.492 } 00:14:37.492 ]' 00:14:37.492 15:14:35 
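[editor's note] The nbd_get_disks RPC returns the JSON array captured above — one {nbd_device, bdev_name} object per export — and the test derives its device count by piping that through jq and grep -c. The accounting step in isolation, a hedged sketch using the rpc.py path and socket visible throughout this run:

#!/usr/bin/env bash
# Sketch of the export-count check traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)

# One line per export: /dev/nbdX -> backing bdev.
jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"' <<<"$disks_json"

# The count the test asserts on: six while the exports are live,
# zero again after teardown (grep -c prints 0 on an empty list).
count=$(jq -r '.[] | .nbd_device' <<<"$disks_json" | grep -c /dev/nbd)
echo "active nbd exports: $count"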
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd0", 00:14:37.492 "bdev_name": "nvme0n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd1", 00:14:37.492 "bdev_name": "nvme1n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd10", 00:14:37.492 "bdev_name": "nvme2n1" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd11", 00:14:37.492 "bdev_name": "nvme2n2" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd12", 00:14:37.492 "bdev_name": "nvme2n3" 00:14:37.492 }, 00:14:37.492 { 00:14:37.492 "nbd_device": "/dev/nbd13", 00:14:37.492 "bdev_name": "nvme3n1" 00:14:37.492 } 00:14:37.492 ]' 00:14:37.492 15:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:37.492 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:37.492 /dev/nbd1 00:14:37.492 /dev/nbd10 00:14:37.492 /dev/nbd11 00:14:37.492 /dev/nbd12 00:14:37.492 /dev/nbd13' 00:14:37.492 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:37.492 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:37.492 /dev/nbd1 00:14:37.492 /dev/nbd10 00:14:37.492 /dev/nbd11 00:14:37.492 /dev/nbd12 00:14:37.492 /dev/nbd13' 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:37.493 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:37.751 256+0 records in 00:14:37.751 256+0 records out 00:14:37.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115549 s, 90.7 MB/s 00:14:37.751 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.751 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:37.751 256+0 records in 00:14:37.751 256+0 records out 00:14:37.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110664 s, 9.5 MB/s 00:14:37.751 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:37.751 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:38.011 256+0 records in 00:14:38.011 256+0 records out 00:14:38.011 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.137107 s, 7.6 MB/s 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:38.011 256+0 records in 00:14:38.011 256+0 records out 00:14:38.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119649 s, 8.8 MB/s 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:38.011 256+0 records in 00:14:38.011 256+0 records out 00:14:38.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114925 s, 9.1 MB/s 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:38.011 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:38.270 256+0 records in 00:14:38.270 256+0 records out 00:14:38.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119488 s, 8.8 MB/s 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:38.270 256+0 records in 00:14:38.270 256+0 records out 00:14:38.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118567 s, 8.8 MB/s 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.270 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:38.528 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.529 15:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.787 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.046 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.305 15:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.564 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.824 15:14:38 
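[editor's note] Stepping back to the write/verify pass that preceded this teardown: nbd_dd_data_verify fills a 1 MiB scratch file from /dev/urandom, dd's it onto every exported device with oflag=direct, then cmp's the first 1 MiB of each device back against the scratch file. A condensed sketch — the device list is hard-coded to the six devices of this run; the real helper derives it from the RPC output:

#!/usr/bin/env bash
# Sketch of the dd-write / cmp-verify pass traced above.
set -e
tmp_file=$(mktemp)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# 1 MiB of random payload, written once and reused for every device.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for dev in "${nbd_list[@]}"; do
    # O_DIRECT so the bytes reach the bdev, not just the page cache.
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    # -b prints differing bytes; -n 1M limits the compare to the payload.
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"
echo "all ${#nbd_list[@]} devices verified"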
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.824 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:40.083 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:40.342 malloc_lvol_verify 00:14:40.342 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:40.600 c1d9c17d-e7af-46cf-bd72-6238c7645067 00:14:40.600 15:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:40.859 8dba0f9d-f30b-4115-969e-aaa7516799f9 00:14:40.859 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:41.118 /dev/nbd0 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
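[editor's note] The mke2fs transcript that follows is the tail of nbd_with_lvol_verify: build a malloc bdev, carve a logical-volume store and a small lvol out of it, export the lvol over /dev/nbd0, check that sysfs reports a non-zero capacity, and prove the device is writable by formatting it. A sketch of the sequence under the same socket and RPC methods visible in the trace (16 MiB/512 B malloc geometry and the 4 MiB lvol are the traced values):

#!/usr/bin/env bash
# Sketch of the lvol-over-nbd check whose mke2fs output appears below.
set -e
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol
$RPC nbd_start_disk lvs/lvol /dev/nbd0

# The kernel publishes capacity (in 512 B sectors) via sysfs; the trace
# reads 8192 sectors = 4 MiB. Zero would mean the export never sized up.
[[ -e /sys/block/nbd0/size ]]
(( $(cat /sys/block/nbd0/size) != 0 ))

# Formatting is the cheapest end-to-end proof the device takes writes.
mkfs.ext4 /dev/nbd0
$RPC nbd_stop_disk /dev/nbd0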
00:14:41.118 mke2fs 1.47.0 (5-Feb-2023) 00:14:41.118 Discarding device blocks: 0/4096 done 00:14:41.118 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:41.118 00:14:41.118 Allocating group tables: 0/1 done 00:14:41.118 Writing inode tables: 0/1 done 00:14:41.118 Creating journal (1024 blocks): done 00:14:41.118 Writing superblocks and filesystem accounting information: 0/1 done 00:14:41.118 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.118 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82807 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82807 ']' 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82807 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82807 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.377 killing process with pid 82807 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82807' 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82807 00:14:41.377 15:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82807 00:14:41.636 15:14:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:41.636 00:14:41.636 real 0m10.399s 00:14:41.636 user 0m13.848s 00:14:41.636 sys 0m4.774s 00:14:41.636 15:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.636 15:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:41.636 ************************************ 
00:14:41.636 END TEST bdev_nbd 00:14:41.636 ************************************ 00:14:41.636 15:14:40 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:41.636 15:14:40 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:41.636 15:14:40 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:41.636 15:14:40 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:41.636 15:14:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.636 15:14:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.636 15:14:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.636 ************************************ 00:14:41.636 START TEST bdev_fio 00:14:41.636 ************************************ 00:14:41.636 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:41.636 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:41.636 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:41.636 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:41.636 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
echo serialize_overlap=1 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.637 15:14:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:41.896 ************************************ 00:14:41.896 START TEST bdev_fio_rw_verify 00:14:41.896 ************************************ 00:14:41.896 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
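[editor's note] Everything fio needs here is synthesized in place: one [job_<bdev>] section with a filename= line per bdev (the six echoes above), then the spdk_bdev plugin is resolved against its sanitizer runtime so fio can load it cleanly. A condensed sketch of both steps — the [global] preamble is a stand-in assumption, since fio_config_gen's real verify template is elided in the trace; flags and paths are the ones visible above:

#!/usr/bin/env bash
# Sketch of job-file assembly plus the ASan-preloaded fio launch.
set -e
SPDK=/home/vagrant/spdk_repo/spdk
fio_cfg=$SPDK/test/bdev/bdev.fio
plugin=$SPDK/build/fio/spdk_bdev
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)

{
    echo "[global]"              # assumed preamble stand-in
    echo "serialize_overlap=1"   # appended once fio >= 3.x is detected
    for b in "${bdevs[@]}"; do
        echo "[job_$b]"
        echo "filename=$b"
    done
} >"$fio_cfg"

# When the plugin links against ASan, the sanitizer runtime must load
# first; pull its path straight out of ldd, as the trace does at @1345.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_cfg" \
    --verify_state_save=0 --spdk_json_conf="$SPDK/test/bdev/bdev.json" \
    --spdk_mem=0 --aux-path="$SPDK/../output"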
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.897 15:14:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:41.897 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:41.897 fio-3.35 00:14:41.897 Starting 6 threads 00:14:54.104 00:14:54.104 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83208: Tue Oct 1 15:14:50 2024 00:14:54.104 read: IOPS=33.6k, BW=131MiB/s (137MB/s)(1311MiB/10001msec) 00:14:54.104 slat (usec): min=2, max=1545, avg= 5.80, stdev= 5.65 00:14:54.104 clat (usec): min=96, max=6013, avg=558.98, 
stdev=216.82 00:14:54.104 lat (usec): min=109, max=6020, avg=564.77, stdev=217.58 00:14:54.104 clat percentiles (usec): 00:14:54.104 | 50.000th=[ 570], 99.000th=[ 1172], 99.900th=[ 2057], 99.990th=[ 3851], 00:14:54.104 | 99.999th=[ 5997] 00:14:54.104 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1328MiB/10001msec); 0 zone resets 00:14:54.104 slat (usec): min=11, max=2202, avg=22.77, stdev=29.70 00:14:54.104 clat (usec): min=80, max=6063, avg=629.99, stdev=238.51 00:14:54.104 lat (usec): min=94, max=6081, avg=652.76, stdev=243.35 00:14:54.104 clat percentiles (usec): 00:14:54.104 | 50.000th=[ 619], 99.000th=[ 1401], 99.900th=[ 2245], 99.990th=[ 3851], 00:14:54.104 | 99.999th=[ 5997] 00:14:54.104 bw ( KiB/s): min=103240, max=158804, per=99.88%, avg=135848.68, stdev=2454.18, samples=114 00:14:54.104 iops : min=25810, max=39700, avg=33961.84, stdev=613.51, samples=114 00:14:54.104 lat (usec) : 100=0.01%, 250=4.87%, 500=26.47%, 750=52.05%, 1000=12.80% 00:14:54.104 lat (msec) : 2=3.67%, 4=0.14%, 10=0.01% 00:14:54.104 cpu : usr=55.77%, sys=30.52%, ctx=8482, majf=0, minf=27954 00:14:54.104 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:54.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.104 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.104 issued rwts: total=335675,340073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.104 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:54.104 00:14:54.104 Run status group 0 (all jobs): 00:14:54.104 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1311MiB (1375MB), run=10001-10001msec 00:14:54.104 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1328MiB (1393MB), run=10001-10001msec 00:14:54.104 ----------------------------------------------------- 00:14:54.104 Suppressions used: 00:14:54.105 count bytes template 00:14:54.105 6 48 /usr/src/fio/parse.c 00:14:54.105 4150 398400 /usr/src/fio/iolog.c 00:14:54.105 1 8 libtcmalloc_minimal.so 00:14:54.105 1 904 libcrypto.so 00:14:54.105 ----------------------------------------------------- 00:14:54.105 00:14:54.105 00:14:54.105 real 0m11.263s 00:14:54.105 user 0m34.251s 00:14:54.105 sys 0m18.715s 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.105 ************************************ 00:14:54.105 END TEST bdev_fio_rw_verify 00:14:54.105 ************************************ 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c7f1531a-bd29-4c12-a833-f62d3475f943"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c7f1531a-bd29-4c12-a833-f62d3475f943",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8c7af91-e110-4e11-9c9a-788e74cd6e1b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8c7af91-e110-4e11-9c9a-788e74cd6e1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "999e26ac-22ac-4f68-b06f-8338c8cb189d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "999e26ac-22ac-4f68-b06f-8338c8cb189d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "920aa845-d701-4447-96eb-2f552301b1e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "920aa845-d701-4447-96eb-2f552301b1e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "274efb42-19f2-45d7-bf7d-09364e98aa3d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "274efb42-19f2-45d7-bf7d-09364e98aa3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b87718e5-9a75-4476-bcfc-c899c7cdda68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b87718e5-9a75-4476-bcfc-c899c7cdda68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:54.105 /home/vagrant/spdk_repo/spdk 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
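[editor's note] The jq step just traced at blockdev.sh@354 keeps only bdev names whose supported_io_types.unmap flag is true; every xNVMe descriptor above reports "unmap": false, so the result is empty ([[ -n '' ]]) and no trim job sections are written. The filter in isolation — note the trace streams pre-captured descriptors (one JSON object per bdev) and so skips the leading .[], whereas pulling a live array from the bdev_get_bdevs RPC, as sketched here, needs the iterator; the default socket path is an assumption:

#!/usr/bin/env bash
# Sketch of the trim-capability filter from blockdev.sh@354.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

trim_capable=$("$RPC" -s /var/tmp/spdk.sock bdev_get_bdevs |
    jq -r '.[] | select(.supported_io_types.unmap == true) | .name')

if [[ -z $trim_capable ]]; then
    # The branch this run takes: no unmap support, no trim jobs.
    echo "no trim-capable bdevs; skipping trim jobs"
else
    printf 'trim-capable: %s\n' $trim_capable
fi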
00:14:54.105 00:14:54.105 real 0m11.495s 00:14:54.105 user 0m34.367s 00:14:54.105 sys 0m18.833s 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.105 15:14:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:54.105 ************************************ 00:14:54.105 END TEST bdev_fio 00:14:54.105 ************************************ 00:14:54.105 15:14:51 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:54.105 15:14:51 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:54.105 15:14:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:54.105 15:14:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.105 15:14:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.105 ************************************ 00:14:54.105 START TEST bdev_verify 00:14:54.105 ************************************ 00:14:54.105 15:14:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:54.105 [2024-10-01 15:14:51.749477] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:14:54.105 [2024-10-01 15:14:51.749619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83383 ] 00:14:54.105 [2024-10-01 15:14:51.921407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:54.105 [2024-10-01 15:14:51.972196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.105 [2024-10-01 15:14:51.972325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.105 Running I/O for 5 seconds... 
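The verify pass now running was launched with the bdevperf flags recorded above; spelled out by hand it would look like the sketch below (paths taken from this log). -q 128 keeps 128 I/Os in flight per job, -o 4096 issues 4 KiB I/Os, -w verify re-reads and checks every write, -t 5 runs for five seconds, and -m 0x3 pins reactors to cores 0 and 1; the -C flag fans each bdev out to every core in the mask, which is what produces two jobs per device (Core Mask 0x1 and 0x2) in the result table that follows.

    # re-running the same 5-second verify pass by hand
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3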
00:14:58.793 21984.00 IOPS, 85.88 MiB/s 23840.00 IOPS, 93.12 MiB/s 23680.00 IOPS, 92.50 MiB/s 23520.00 IOPS, 91.88 MiB/s 23596.80 IOPS, 92.17 MiB/s 00:14:58.793 Latency(us) 00:14:58.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.793 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0xa0000 00:14:58.794 nvme0n1 : 5.05 1799.62 7.03 0.00 0.00 71002.02 9001.33 66536.15 00:14:58.794 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0xa0000 length 0xa0000 00:14:58.794 nvme0n1 : 5.06 1772.31 6.92 0.00 0.00 72093.71 8317.02 64851.69 00:14:58.794 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0xbd0bd 00:14:58.794 nvme1n1 : 5.06 2726.55 10.65 0.00 0.00 46695.09 5316.58 57692.74 00:14:58.794 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:58.794 nvme1n1 : 5.05 2749.00 10.74 0.00 0.00 46337.99 6027.21 57271.62 00:14:58.794 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0x80000 00:14:58.794 nvme2n1 : 5.05 1799.12 7.03 0.00 0.00 70571.35 9738.28 60219.42 00:14:58.794 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x80000 length 0x80000 00:14:58.794 nvme2n1 : 5.05 1772.97 6.93 0.00 0.00 71683.34 8159.10 68220.61 00:14:58.794 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0x80000 00:14:58.794 nvme2n2 : 5.06 1795.92 7.02 0.00 0.00 70542.35 13896.79 59377.20 00:14:58.794 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x80000 length 0x80000 00:14:58.794 nvme2n2 : 5.06 1771.88 6.92 0.00 0.00 71573.50 8369.66 62746.11 00:14:58.794 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0x80000 00:14:58.794 nvme2n3 : 5.07 1816.12 7.09 0.00 0.00 69636.70 4605.94 63167.23 00:14:58.794 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x80000 length 0x80000 00:14:58.794 nvme2n3 : 5.06 1771.45 6.92 0.00 0.00 71464.43 8738.13 67799.49 00:14:58.794 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x0 length 0x20000 00:14:58.794 nvme3n1 : 5.08 1815.45 7.09 0.00 0.00 69563.78 4079.55 68220.61 00:14:58.794 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:58.794 Verification LBA range: start 0x20000 length 0x20000 00:14:58.794 nvme3n1 : 5.07 1791.55 7.00 0.00 0.00 70549.04 2921.48 73695.10 00:14:58.794 =================================================================================================================== 00:14:58.794 Total : 23381.94 91.34 0.00 0.00 65163.92 2921.48 73695.10 00:14:59.054 00:14:59.054 real 0m5.863s 00:14:59.054 user 0m8.585s 00:14:59.054 sys 0m2.090s 00:14:59.054 ************************************ 00:14:59.054 END TEST bdev_verify 00:14:59.054 ************************************ 00:14:59.054 15:14:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.054 15:14:57 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:59.055 15:14:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:59.055 15:14:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:59.055 15:14:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.055 15:14:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.055 ************************************ 00:14:59.055 START TEST bdev_verify_big_io 00:14:59.055 ************************************ 00:14:59.055 15:14:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:59.313 [2024-10-01 15:14:57.683887] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:14:59.313 [2024-10-01 15:14:57.684017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83471 ] 00:14:59.313 [2024-10-01 15:14:57.853095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.573 [2024-10-01 15:14:57.906292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.573 [2024-10-01 15:14:57.906389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.833 Running I/O for 5 seconds... 00:15:05.433 2267.00 IOPS, 141.69 MiB/s 3194.50 IOPS, 199.66 MiB/s 3780.33 IOPS, 236.27 MiB/s 00:15:05.433 Latency(us) 00:15:05.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.433 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0xa000 00:15:05.433 nvme0n1 : 5.70 143.21 8.95 0.00 0.00 872293.50 139810.13 1778789.17 00:15:05.433 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0xa000 length 0xa000 00:15:05.433 nvme0n1 : 5.70 157.14 9.82 0.00 0.00 798762.30 74537.33 1280189.17 00:15:05.433 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0xbd0b 00:15:05.433 nvme1n1 : 5.70 151.52 9.47 0.00 0.00 806489.00 12159.69 1637294.57 00:15:05.433 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:05.433 nvme1n1 : 5.71 237.22 14.83 0.00 0.00 509041.06 11633.30 636725.67 00:15:05.433 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0x8000 00:15:05.433 nvme2n1 : 5.69 180.08 11.25 0.00 0.00 664397.75 11580.66 1192597.28 00:15:05.433 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x8000 length 0x8000 00:15:05.433 nvme2n1 : 5.71 201.88 12.62 0.00 0.00 591240.08 11159.54 1051102.69 00:15:05.433 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0x8000 00:15:05.433 nvme2n2 : 5.71 134.60 8.41 0.00 0.00 
866796.40 126334.46 1536227.01 00:15:05.433 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x8000 length 0x8000 00:15:05.433 nvme2n2 : 5.71 154.68 9.67 0.00 0.00 753004.85 12844.00 1152170.26 00:15:05.433 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0x8000 00:15:05.433 nvme2n3 : 5.72 159.51 9.97 0.00 0.00 722740.83 9369.81 1307140.52 00:15:05.433 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x8000 length 0x8000 00:15:05.433 nvme2n3 : 5.72 151.30 9.46 0.00 0.00 753864.91 19160.73 1414945.93 00:15:05.433 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x0 length 0x2000 00:15:05.433 nvme3n1 : 5.71 200.45 12.53 0.00 0.00 560490.32 15581.25 1590129.71 00:15:05.433 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:05.433 Verification LBA range: start 0x2000 length 0x2000 00:15:05.433 nvme3n1 : 5.72 153.80 9.61 0.00 0.00 724170.35 10475.23 1273451.33 00:15:05.433 =================================================================================================================== 00:15:05.433 Total : 2025.40 126.59 0.00 0.00 700377.04 9369.81 1778789.17 00:15:05.691 00:15:05.691 real 0m6.551s 00:15:05.691 user 0m11.709s 00:15:05.691 sys 0m0.656s 00:15:05.691 15:15:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.691 15:15:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.691 ************************************ 00:15:05.691 END TEST bdev_verify_big_io 00:15:05.691 ************************************ 00:15:05.691 15:15:04 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:05.691 15:15:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:05.691 15:15:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.691 15:15:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.691 ************************************ 00:15:05.691 START TEST bdev_write_zeroes 00:15:05.691 ************************************ 00:15:05.691 15:15:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:05.950 [2024-10-01 15:15:04.309877] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:15:05.950 [2024-10-01 15:15:04.310220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83565 ] 00:15:05.950 [2024-10-01 15:15:04.481248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.208 [2024-10-01 15:15:04.531725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.208 Running I/O for 1 seconds... 
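The write_zeroes pass that just started reuses the same bdevperf harness with a lighter profile: a single core (-c 0x1 in the EAL parameters above), queue depth 128, 4 KiB write_zeroes commands for one second, so the table below shows exactly one job per device. A by-hand equivalent, with paths taken from this log:

    # one-second, single-core write_zeroes sweep over the same bdevs
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1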
00:15:07.585 56704.00 IOPS, 221.50 MiB/s
00:15:07.585 Latency(us)
00:15:07.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:07.585 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme0n1 : 1.03 9039.02 35.31 0.00 0.00 14148.68 8211.74 27161.91
00:15:07.585 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme1n1 : 1.03 11163.69 43.61 0.00 0.00 11447.50 5079.70 20424.07
00:15:07.585 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme2n1 : 1.03 9029.76 35.27 0.00 0.00 14075.59 8317.02 26951.35
00:15:07.585 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme2n2 : 1.04 9021.07 35.24 0.00 0.00 14077.40 8264.38 27583.02
00:15:07.585 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme2n3 : 1.04 9012.28 35.20 0.00 0.00 14083.44 8264.38 27583.02
00:15:07.585 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:07.585 nvme3n1 : 1.04 9003.55 35.17 0.00 0.00 14088.86 8211.74 27793.58
00:15:07.585 ===================================================================================================================
00:15:07.585 Total : 56269.36 219.80 0.00 0.00 13571.15 5079.70 27793.58
00:15:07.585
00:15:07.585 real 0m1.804s
00:15:07.585 user 0m0.986s
00:15:07.585 sys 0m0.631s
00:15:07.585 15:15:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:07.585 15:15:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:15:07.585 ************************************
00:15:07.585 END TEST bdev_write_zeroes
00:15:07.585 ************************************
00:15:07.585 15:15:06 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:07.585 15:15:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:15:07.585 15:15:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:07.585 15:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:07.585 ************************************
00:15:07.585 START TEST bdev_json_nonenclosed
00:15:07.585 ************************************
00:15:07.585 15:15:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:07.844 [2024-10-01 15:15:06.196038] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:15:07.844 [2024-10-01 15:15:06.196259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83608 ]
00:15:07.844 [2024-10-01 15:15:06.391511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:08.104 [2024-10-01 15:15:06.442387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:08.104 [2024-10-01 15:15:06.442508] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
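The "not enclosed in {}" error above is the expected outcome: bdev_json_nonenclosed deliberately feeds bdevperf a config whose subsystems block is not wrapped in a top-level JSON object. The repo's actual nonenclosed.json is not reproduced in this log; a hypothetical input that would trip the same json_config_prepare_ctx check could look like this:

    # hypothetical stand-in for test/bdev/nonenclosed.json: well-formed
    # JSON fragments, but no enclosing {...} object around "subsystems"
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF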
00:15:08.104 [2024-10-01 15:15:06.442535] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:08.104 [2024-10-01 15:15:06.442550] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.104 00:15:08.104 real 0m0.476s 00:15:08.104 user 0m0.207s 00:15:08.104 sys 0m0.164s 00:15:08.104 15:15:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.104 15:15:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:08.104 ************************************ 00:15:08.104 END TEST bdev_json_nonenclosed 00:15:08.104 ************************************ 00:15:08.104 15:15:06 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:08.104 15:15:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:08.104 15:15:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.104 15:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.104 ************************************ 00:15:08.104 START TEST bdev_json_nonarray 00:15:08.104 ************************************ 00:15:08.104 15:15:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:08.362 [2024-10-01 15:15:06.734530] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:15:08.363 [2024-10-01 15:15:06.734677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83628 ] 00:15:08.363 [2024-10-01 15:15:06.901803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.621 [2024-10-01 15:15:06.950416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.621 [2024-10-01 15:15:06.950549] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:08.621 [2024-10-01 15:15:06.950583] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:08.621 [2024-10-01 15:15:06.950631] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.621 00:15:08.621 real 0m0.429s 00:15:08.621 user 0m0.177s 00:15:08.621 sys 0m0.148s 00:15:08.621 15:15:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.621 ************************************ 00:15:08.621 END TEST bdev_json_nonarray 00:15:08.621 ************************************ 00:15:08.621 15:15:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:08.621 15:15:07 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:09.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.456 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.456 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.456 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.456 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.456 00:15:24.456 real 1m3.042s 00:15:24.456 user 1m23.925s 00:15:24.456 sys 1m7.050s 00:15:24.456 15:15:21 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.456 15:15:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.456 ************************************ 00:15:24.456 END TEST blockdev_xnvme 00:15:24.456 ************************************ 00:15:24.456 15:15:21 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:24.456 15:15:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:24.456 15:15:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.456 15:15:21 -- common/autotest_common.sh@10 -- # set +x 00:15:24.456 ************************************ 00:15:24.456 START TEST ublk 00:15:24.456 ************************************ 00:15:24.456 15:15:21 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:24.456 * Looking for test storage... 
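Before the ublk suite can run, setup.sh rebinds the four emulated NVMe controllers from the kernel nvme driver to uio_pci_generic, as the "(1b36 0010): nvme -> uio_pci_generic" lines above show; the virtio disk at 0000:00:03.0 is skipped because it backs mounted filesystems. One way to double-check a rebinding of this kind, assuming the usual sysfs layout (any of the addresses above works):

    # the driver symlink should now point at uio_pci_generic, not nvme
    readlink /sys/bus/pci/devices/0000:00:10.0/driver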
00:15:24.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.456 15:15:22 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.456 15:15:22 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.456 15:15:22 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.456 15:15:22 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.456 15:15:22 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.456 15:15:22 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:24.456 15:15:22 ublk -- scripts/common.sh@345 -- # : 1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.456 15:15:22 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.456 15:15:22 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@353 -- # local d=1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.456 15:15:22 ublk -- scripts/common.sh@355 -- # echo 1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.456 15:15:22 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@353 -- # local d=2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.456 15:15:22 ublk -- scripts/common.sh@355 -- # echo 2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.456 15:15:22 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.456 15:15:22 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.456 15:15:22 ublk -- scripts/common.sh@368 -- # return 0 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:24.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.456 --rc genhtml_branch_coverage=1 00:15:24.456 --rc genhtml_function_coverage=1 00:15:24.456 --rc genhtml_legend=1 00:15:24.456 --rc geninfo_all_blocks=1 00:15:24.456 --rc geninfo_unexecuted_blocks=1 00:15:24.456 00:15:24.456 ' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:24.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.456 --rc genhtml_branch_coverage=1 00:15:24.456 --rc genhtml_function_coverage=1 00:15:24.456 --rc genhtml_legend=1 00:15:24.456 --rc geninfo_all_blocks=1 00:15:24.456 --rc geninfo_unexecuted_blocks=1 00:15:24.456 00:15:24.456 ' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:24.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.456 --rc genhtml_branch_coverage=1 00:15:24.456 --rc 
genhtml_function_coverage=1 00:15:24.456 --rc genhtml_legend=1 00:15:24.456 --rc geninfo_all_blocks=1 00:15:24.456 --rc geninfo_unexecuted_blocks=1 00:15:24.456 00:15:24.456 ' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:24.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.456 --rc genhtml_branch_coverage=1 00:15:24.456 --rc genhtml_function_coverage=1 00:15:24.456 --rc genhtml_legend=1 00:15:24.456 --rc geninfo_all_blocks=1 00:15:24.456 --rc geninfo_unexecuted_blocks=1 00:15:24.456 00:15:24.456 ' 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:24.456 15:15:22 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:24.456 15:15:22 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:24.456 15:15:22 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:24.456 15:15:22 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:24.456 15:15:22 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:24.456 15:15:22 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:24.456 15:15:22 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:24.456 15:15:22 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:24.456 15:15:22 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.456 15:15:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:24.456 ************************************ 00:15:24.456 START TEST test_save_ublk_config 00:15:24.456 ************************************ 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83931 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83931 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83931 ']' 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
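waitforlisten then blocks until the new target answers on /var/tmp/spdk.sock. A minimal sketch of that kind of readiness poll, assuming the stock rpc.py client and the rpc_get_methods RPC (an illustration of the idea, not the repo's waitforlisten implementation):

    # poll until the RPC socket accepts a harmless request, then proceed
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done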
00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.456 15:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.456 [2024-10-01 15:15:22.281375] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:15:24.456 [2024-10-01 15:15:22.281696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83931 ] 00:15:24.456 [2024-10-01 15:15:22.450380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.456 [2024-10-01 15:15:22.508517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.715 [2024-10-01 15:15:23.178194] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:24.715 [2024-10-01 15:15:23.178513] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:24.715 malloc0 00:15:24.715 [2024-10-01 15:15:23.210315] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:24.715 [2024-10-01 15:15:23.210414] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:24.715 [2024-10-01 15:15:23.210425] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:24.715 [2024-10-01 15:15:23.210439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:24.715 [2024-10-01 15:15:23.219301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:24.715 [2024-10-01 15:15:23.219335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:24.715 [2024-10-01 15:15:23.226208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:24.715 [2024-10-01 15:15:23.226322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:24.715 [2024-10-01 15:15:23.243202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:24.715 0 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.715 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:25.283 
"subsystems": [ 00:15:25.283 { 00:15:25.283 "subsystem": "fsdev", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "fsdev_set_opts", 00:15:25.283 "params": { 00:15:25.283 "fsdev_io_pool_size": 65535, 00:15:25.283 "fsdev_io_cache_size": 256 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "keyring", 00:15:25.283 "config": [] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "iobuf", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "iobuf_set_options", 00:15:25.283 "params": { 00:15:25.283 "small_pool_count": 8192, 00:15:25.283 "large_pool_count": 1024, 00:15:25.283 "small_bufsize": 8192, 00:15:25.283 "large_bufsize": 135168 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "sock", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "sock_set_default_impl", 00:15:25.283 "params": { 00:15:25.283 "impl_name": "posix" 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "sock_impl_set_options", 00:15:25.283 "params": { 00:15:25.283 "impl_name": "ssl", 00:15:25.283 "recv_buf_size": 4096, 00:15:25.283 "send_buf_size": 4096, 00:15:25.283 "enable_recv_pipe": true, 00:15:25.283 "enable_quickack": false, 00:15:25.283 "enable_placement_id": 0, 00:15:25.283 "enable_zerocopy_send_server": true, 00:15:25.283 "enable_zerocopy_send_client": false, 00:15:25.283 "zerocopy_threshold": 0, 00:15:25.283 "tls_version": 0, 00:15:25.283 "enable_ktls": false 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "sock_impl_set_options", 00:15:25.283 "params": { 00:15:25.283 "impl_name": "posix", 00:15:25.283 "recv_buf_size": 2097152, 00:15:25.283 "send_buf_size": 2097152, 00:15:25.283 "enable_recv_pipe": true, 00:15:25.283 "enable_quickack": false, 00:15:25.283 "enable_placement_id": 0, 00:15:25.283 "enable_zerocopy_send_server": true, 00:15:25.283 "enable_zerocopy_send_client": false, 00:15:25.283 "zerocopy_threshold": 0, 00:15:25.283 "tls_version": 0, 00:15:25.283 "enable_ktls": false 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "vmd", 00:15:25.283 "config": [] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "accel", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "accel_set_options", 00:15:25.283 "params": { 00:15:25.283 "small_cache_size": 128, 00:15:25.283 "large_cache_size": 16, 00:15:25.283 "task_count": 2048, 00:15:25.283 "sequence_count": 2048, 00:15:25.283 "buf_count": 2048 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "bdev", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "bdev_set_options", 00:15:25.283 "params": { 00:15:25.283 "bdev_io_pool_size": 65535, 00:15:25.283 "bdev_io_cache_size": 256, 00:15:25.283 "bdev_auto_examine": true, 00:15:25.283 "iobuf_small_cache_size": 128, 00:15:25.283 "iobuf_large_cache_size": 16 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_raid_set_options", 00:15:25.283 "params": { 00:15:25.283 "process_window_size_kb": 1024, 00:15:25.283 "process_max_bandwidth_mb_sec": 0 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_iscsi_set_options", 00:15:25.283 "params": { 00:15:25.283 "timeout_sec": 30 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_nvme_set_options", 00:15:25.283 "params": { 00:15:25.283 "action_on_timeout": "none", 00:15:25.283 "timeout_us": 0, 00:15:25.283 
"timeout_admin_us": 0, 00:15:25.283 "keep_alive_timeout_ms": 10000, 00:15:25.283 "arbitration_burst": 0, 00:15:25.283 "low_priority_weight": 0, 00:15:25.283 "medium_priority_weight": 0, 00:15:25.283 "high_priority_weight": 0, 00:15:25.283 "nvme_adminq_poll_period_us": 10000, 00:15:25.283 "nvme_ioq_poll_period_us": 0, 00:15:25.283 "io_queue_requests": 0, 00:15:25.283 "delay_cmd_submit": true, 00:15:25.283 "transport_retry_count": 4, 00:15:25.283 "bdev_retry_count": 3, 00:15:25.283 "transport_ack_timeout": 0, 00:15:25.283 "ctrlr_loss_timeout_sec": 0, 00:15:25.283 "reconnect_delay_sec": 0, 00:15:25.283 "fast_io_fail_timeout_sec": 0, 00:15:25.283 "disable_auto_failback": false, 00:15:25.283 "generate_uuids": false, 00:15:25.283 "transport_tos": 0, 00:15:25.283 "nvme_error_stat": false, 00:15:25.283 "rdma_srq_size": 0, 00:15:25.283 "io_path_stat": false, 00:15:25.283 "allow_accel_sequence": false, 00:15:25.283 "rdma_max_cq_size": 0, 00:15:25.283 "rdma_cm_event_timeout_ms": 0, 00:15:25.283 "dhchap_digests": [ 00:15:25.283 "sha256", 00:15:25.283 "sha384", 00:15:25.283 "sha512" 00:15:25.283 ], 00:15:25.283 "dhchap_dhgroups": [ 00:15:25.283 "null", 00:15:25.283 "ffdhe2048", 00:15:25.283 "ffdhe3072", 00:15:25.283 "ffdhe4096", 00:15:25.283 "ffdhe6144", 00:15:25.283 "ffdhe8192" 00:15:25.283 ] 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_nvme_set_hotplug", 00:15:25.283 "params": { 00:15:25.283 "period_us": 100000, 00:15:25.283 "enable": false 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_malloc_create", 00:15:25.283 "params": { 00:15:25.283 "name": "malloc0", 00:15:25.283 "num_blocks": 8192, 00:15:25.283 "block_size": 4096, 00:15:25.283 "physical_block_size": 4096, 00:15:25.283 "uuid": "ca9686ed-12b5-491d-9ffe-b32d88dc4f9f", 00:15:25.283 "optimal_io_boundary": 0, 00:15:25.283 "md_size": 0, 00:15:25.283 "dif_type": 0, 00:15:25.283 "dif_is_head_of_md": false, 00:15:25.283 "dif_pi_format": 0 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "bdev_wait_for_examine" 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "scsi", 00:15:25.283 "config": null 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "scheduler", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "framework_set_scheduler", 00:15:25.283 "params": { 00:15:25.283 "name": "static" 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "vhost_scsi", 00:15:25.283 "config": [] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "vhost_blk", 00:15:25.283 "config": [] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "ublk", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "ublk_create_target", 00:15:25.283 "params": { 00:15:25.283 "cpumask": "1" 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "ublk_start_disk", 00:15:25.283 "params": { 00:15:25.283 "bdev_name": "malloc0", 00:15:25.283 "ublk_id": 0, 00:15:25.283 "num_queues": 1, 00:15:25.283 "queue_depth": 128 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "nbd", 00:15:25.283 "config": [] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "nvmf", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "nvmf_set_config", 00:15:25.283 "params": { 00:15:25.283 "discovery_filter": "match_any", 00:15:25.283 "admin_cmd_passthru": { 00:15:25.283 "identify_ctrlr": false 00:15:25.283 }, 00:15:25.283 "dhchap_digests": [ 
00:15:25.283 "sha256", 00:15:25.283 "sha384", 00:15:25.283 "sha512" 00:15:25.283 ], 00:15:25.283 "dhchap_dhgroups": [ 00:15:25.283 "null", 00:15:25.283 "ffdhe2048", 00:15:25.283 "ffdhe3072", 00:15:25.283 "ffdhe4096", 00:15:25.283 "ffdhe6144", 00:15:25.283 "ffdhe8192" 00:15:25.283 ] 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "nvmf_set_max_subsystems", 00:15:25.283 "params": { 00:15:25.283 "max_subsystems": 1024 00:15:25.283 } 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "method": "nvmf_set_crdt", 00:15:25.283 "params": { 00:15:25.283 "crdt1": 0, 00:15:25.283 "crdt2": 0, 00:15:25.283 "crdt3": 0 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }, 00:15:25.283 { 00:15:25.283 "subsystem": "iscsi", 00:15:25.283 "config": [ 00:15:25.283 { 00:15:25.283 "method": "iscsi_set_options", 00:15:25.283 "params": { 00:15:25.283 "node_base": "iqn.2016-06.io.spdk", 00:15:25.283 "max_sessions": 128, 00:15:25.283 "max_connections_per_session": 2, 00:15:25.283 "max_queue_depth": 64, 00:15:25.283 "default_time2wait": 2, 00:15:25.283 "default_time2retain": 20, 00:15:25.283 "first_burst_length": 8192, 00:15:25.283 "immediate_data": true, 00:15:25.283 "allow_duplicated_isid": false, 00:15:25.283 "error_recovery_level": 0, 00:15:25.283 "nop_timeout": 60, 00:15:25.283 "nop_in_interval": 30, 00:15:25.283 "disable_chap": false, 00:15:25.283 "require_chap": false, 00:15:25.283 "mutual_chap": false, 00:15:25.283 "chap_group": 0, 00:15:25.283 "max_large_datain_per_connection": 64, 00:15:25.283 "max_r2t_per_connection": 4, 00:15:25.283 "pdu_pool_size": 36864, 00:15:25.283 "immediate_data_pool_size": 16384, 00:15:25.283 "data_out_pool_size": 2048 00:15:25.283 } 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 } 00:15:25.283 ] 00:15:25.283 }' 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83931 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83931 ']' 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83931 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83931 00:15:25.283 killing process with pid 83931 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83931' 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83931 00:15:25.283 15:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83931 00:15:25.542 [2024-10-01 15:15:23.896408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:25.542 [2024-10-01 15:15:23.937290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:25.542 [2024-10-01 15:15:23.937434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:25.542 [2024-10-01 15:15:23.943210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:25.542 [2024-10-01 15:15:23.943273] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:15:25.542 [2024-10-01 15:15:23.943283] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:25.542 [2024-10-01 15:15:23.943313] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:25.542 [2024-10-01 15:15:23.943447] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:26.108 15:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83969 00:15:26.108 15:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83969 00:15:26.108 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83969 ']' 00:15:26.109 15:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:26.109 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.109 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.109 15:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:26.109 "subsystems": [ 00:15:26.109 { 00:15:26.109 "subsystem": "fsdev", 00:15:26.109 "config": [ 00:15:26.109 { 00:15:26.109 "method": "fsdev_set_opts", 00:15:26.109 "params": { 00:15:26.109 "fsdev_io_pool_size": 65535, 00:15:26.109 "fsdev_io_cache_size": 256 00:15:26.109 } 00:15:26.109 } 00:15:26.109 ] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "keyring", 00:15:26.109 "config": [] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "iobuf", 00:15:26.109 "config": [ 00:15:26.109 { 00:15:26.109 "method": "iobuf_set_options", 00:15:26.109 "params": { 00:15:26.109 "small_pool_count": 8192, 00:15:26.109 "large_pool_count": 1024, 00:15:26.109 "small_bufsize": 8192, 00:15:26.109 "large_bufsize": 135168 00:15:26.109 } 00:15:26.109 } 00:15:26.109 ] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "sock", 00:15:26.109 "config": [ 00:15:26.109 { 00:15:26.109 "method": "sock_set_default_impl", 00:15:26.109 "params": { 00:15:26.109 "impl_name": "posix" 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "sock_impl_set_options", 00:15:26.109 "params": { 00:15:26.109 "impl_name": "ssl", 00:15:26.109 "recv_buf_size": 4096, 00:15:26.109 "send_buf_size": 4096, 00:15:26.109 "enable_recv_pipe": true, 00:15:26.109 "enable_quickack": false, 00:15:26.109 "enable_placement_id": 0, 00:15:26.109 "enable_zerocopy_send_server": true, 00:15:26.109 "enable_zerocopy_send_client": false, 00:15:26.109 "zerocopy_threshold": 0, 00:15:26.109 "tls_version": 0, 00:15:26.109 "enable_ktls": false 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "sock_impl_set_options", 00:15:26.109 "params": { 00:15:26.109 "impl_name": "posix", 00:15:26.109 "recv_buf_size": 2097152, 00:15:26.109 "send_buf_size": 2097152, 00:15:26.109 "enable_recv_pipe": true, 00:15:26.109 "enable_quickack": false, 00:15:26.109 "enable_placement_id": 0, 00:15:26.109 "enable_zerocopy_send_server": true, 00:15:26.109 "enable_zerocopy_send_client": false, 00:15:26.109 "zerocopy_threshold": 0, 00:15:26.109 "tls_version": 0, 00:15:26.109 "enable_ktls": false 00:15:26.109 } 00:15:26.109 } 00:15:26.109 ] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "vmd", 00:15:26.109 "config": [] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "accel", 00:15:26.109 "config": [ 00:15:26.109 { 00:15:26.109 "method": "accel_set_options", 00:15:26.109 "params": { 00:15:26.109 "small_cache_size": 128, 00:15:26.109 "large_cache_size": 16, 00:15:26.109 "task_count": 2048, 00:15:26.109 
"sequence_count": 2048, 00:15:26.109 "buf_count": 2048 00:15:26.109 } 00:15:26.109 } 00:15:26.109 ] 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "subsystem": "bdev", 00:15:26.109 "config": [ 00:15:26.109 { 00:15:26.109 "method": "bdev_set_options", 00:15:26.109 "params": { 00:15:26.109 "bdev_io_pool_size": 65535, 00:15:26.109 "bdev_io_cache_size": 256, 00:15:26.109 "bdev_auto_examine": true, 00:15:26.109 "iobuf_small_cache_size": 128, 00:15:26.109 "iobuf_large_cache_size": 16 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "bdev_raid_set_options", 00:15:26.109 "params": { 00:15:26.109 "process_window_size_kb": 1024, 00:15:26.109 "process_max_bandwidth_mb_sec": 0 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "bdev_iscsi_set_options", 00:15:26.109 "params": { 00:15:26.109 "timeout_sec": 30 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "bdev_nvme_set_options", 00:15:26.109 "params": { 00:15:26.109 "action_on_timeout": "none", 00:15:26.109 "timeout_us": 0, 00:15:26.109 "timeout_admin_us": 0, 00:15:26.109 "keep_alive_timeout_ms": 10000, 00:15:26.109 "arbitration_burst": 0, 00:15:26.109 "low_priority_weight": 0, 00:15:26.109 "medium_priority_weight": 0, 00:15:26.109 "high_priority_weight": 0, 00:15:26.109 "nvme_adminq_poll_period_us": 10000, 00:15:26.109 "nvme_ioq_poll_period_us": 0, 00:15:26.109 "io_queue_requests": 0, 00:15:26.109 "delay_cmd_submit": true, 00:15:26.109 "transport_retry_count": 4, 00:15:26.109 "bdev_retry_count": 3, 00:15:26.109 "transport_ack_timeout": 0, 00:15:26.109 "ctrlr_loss_timeout_sec": 0, 00:15:26.109 "reconnect_delay_sec": 0, 00:15:26.109 "fast_io_fail_timeout_sec": 0, 00:15:26.109 "disable_auto_failback": false, 00:15:26.109 "generate_uuids": false, 00:15:26.109 "transport_tos": 0, 00:15:26.109 "nvme_error_stat": false, 00:15:26.109 "rdma_srq_size": 0, 00:15:26.109 "io_path_stat": false, 00:15:26.109 "allow_accel_sequence": false, 00:15:26.109 "rdma_max_cq_size": 0, 00:15:26.109 "rdma_cm_event_timeout_ms": 0, 00:15:26.109 "dhchap_digests": [ 00:15:26.109 "sha256", 00:15:26.109 "sha384", 00:15:26.109 "sha512" 00:15:26.109 ], 00:15:26.109 "dhchap_dhgroups": [ 00:15:26.109 "null", 00:15:26.109 "ffdhe2048", 00:15:26.109 "ffdhe3072", 00:15:26.109 "ffdhe4096", 00:15:26.109 "ffdhe6144", 00:15:26.109 "ffdhe8192" 00:15:26.109 ] 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "bdev_nvme_set_hotplug", 00:15:26.109 "params": { 00:15:26.109 "period_us": 100000, 00:15:26.109 "enable": false 00:15:26.109 } 00:15:26.109 }, 00:15:26.109 { 00:15:26.109 "method": "bdev_malloc_create", 00:15:26.109 "params": { 00:15:26.109 "name": "malloc0", 00:15:26.109 "num_blocks": 8192, 00:15:26.109 "block_size": 4096, 00:15:26.109 "physical_block_size": 4096, 00:15:26.109 "uuid": "ca9686ed-12b5-491d-9ffe-b32d88dc4f9f", 00:15:26.109 "optimal_io_boundary": 0, 00:15:26.109 "md_size": 0, 00:15:26.109 "dif_type": 0, 00:15:26.109 "dif_is_head_of_md": false, 00:15:26.109 "dif_pi_format": 0 00:15:26.109 } 00:15:26.109 }, 00:15:26.110 { 00:15:26.110 "method": "bdev_wait_for_examine" 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "scsi", 00:15:26.110 "config": null 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "scheduler", 00:15:26.110 "config": [ 00:15:26.110 { 00:15:26.110 "method": "framework_set_scheduler", 00:15:26.110 "params": { 00:15:26.110 "name": "static" 00:15:26.110 } 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": 
"vhost_scsi", 00:15:26.110 "config": [] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "vhost_blk", 00:15:26.110 "config": [] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "ublk", 00:15:26.110 "config": [ 00:15:26.110 { 00:15:26.110 "method": "ublk_create_target", 00:15:26.110 "params": { 00:15:26.110 "cpumask": "1" 00:15:26.110 } 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "method": "ublk_start_disk", 00:15:26.110 "params": { 00:15:26.110 "bdev_name": "malloc0", 00:15:26.110 "ublk_id": 0, 00:15:26.110 "num_queues": 1, 00:15:26.110 "queue_depth": 128 00:15:26.110 } 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "nbd", 00:15:26.110 "config": [] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "nvmf", 00:15:26.110 "config": [ 00:15:26.110 { 00:15:26.110 "method": "nvmf_set_config", 00:15:26.110 "params": { 00:15:26.110 "discovery_filter": "match_any", 00:15:26.110 "admin_cmd_passthru": { 00:15:26.110 "identify_ctrlr": false 00:15:26.110 }, 00:15:26.110 "dhchap_digests": [ 00:15:26.110 "sha256", 00:15:26.110 "sha384", 00:15:26.110 "sha512" 00:15:26.110 ], 00:15:26.110 "dhchap_dhgroups": [ 00:15:26.110 "null", 00:15:26.110 "ffdhe2048", 00:15:26.110 "ffdhe3072", 00:15:26.110 "ffdhe4096", 00:15:26.110 "ffdhe6144", 00:15:26.110 "ffdhe8192" 00:15:26.110 ] 00:15:26.110 } 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "method": "nvmf_set_max_subsystems", 00:15:26.110 "params": { 00:15:26.110 "max_subsystems": 1024 00:15:26.110 } 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "method": "nvmf_set_crdt", 00:15:26.110 "params": { 00:15:26.110 "crdt1": 0, 00:15:26.110 "crdt2": 0, 00:15:26.110 "crdt3": 0 00:15:26.110 } 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 }, 00:15:26.110 { 00:15:26.110 "subsystem": "iscsi", 00:15:26.110 "config": [ 00:15:26.110 { 00:15:26.110 "method": "iscsi_set_options", 00:15:26.110 "params": { 00:15:26.110 "node_base": "iqn.2016-06.io.spdk", 00:15:26.110 "max_sessions": 128, 00:15:26.110 "max_connections_per_session": 2, 00:15:26.110 "max_queue_depth": 64, 00:15:26.110 "default_time2wait": 2, 00:15:26.110 "default_time2retain": 20, 00:15:26.110 "first_burst_length": 8192, 00:15:26.110 "immediate_data": true, 00:15:26.110 "allow_duplicated_isid": false, 00:15:26.110 "error_recovery_level": 0, 00:15:26.110 "nop_timeout": 60, 00:15:26.110 "nop_in_interval": 30, 00:15:26.110 "disable_chap": false, 00:15:26.110 "require_chap": false, 00:15:26.110 "mutual_chap": false, 00:15:26.110 "chap_group": 0, 00:15:26.110 "max_large_datain_per_connection": 64, 00:15:26.110 "max_r2t_per_connection": 4, 00:15:26.110 "pdu_pool_size": 36864, 00:15:26.110 "immediate_data_pool_size": 16384, 00:15:26.110 "data_out_pool_size": 2048 00:15:26.110 } 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 } 00:15:26.110 ] 00:15:26.110 }' 00:15:26.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.110 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.110 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.110 15:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:26.110 [2024-10-01 15:15:24.557566] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:15:26.110 [2024-10-01 15:15:24.557896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83969 ] 00:15:26.368 [2024-10-01 15:15:24.728560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.368 [2024-10-01 15:15:24.788287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.936 [2024-10-01 15:15:25.178194] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:26.936 [2024-10-01 15:15:25.178505] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:26.936 [2024-10-01 15:15:25.186333] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:26.936 [2024-10-01 15:15:25.186421] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:26.936 [2024-10-01 15:15:25.186437] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:26.936 [2024-10-01 15:15:25.186453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:26.936 [2024-10-01 15:15:25.194337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:26.936 [2024-10-01 15:15:25.194365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:26.936 [2024-10-01 15:15:25.202209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:26.936 [2024-10-01 15:15:25.202316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:26.936 [2024-10-01 15:15:25.219204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83969 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83969 ']' 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83969 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.936 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83969 00:15:27.194 killing process with pid 83969 00:15:27.194 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.194 
15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.194 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83969' 00:15:27.194 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83969 00:15:27.194 15:15:25 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83969 00:15:27.452 [2024-10-01 15:15:25.764790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:27.452 [2024-10-01 15:15:25.819234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:27.452 [2024-10-01 15:15:25.819382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:27.452 [2024-10-01 15:15:25.827222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:27.452 [2024-10-01 15:15:25.827295] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:27.452 [2024-10-01 15:15:25.827308] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:27.452 [2024-10-01 15:15:25.827344] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:27.452 [2024-10-01 15:15:25.827494] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:28.017 15:15:26 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:28.017 00:15:28.017 real 0m4.172s 00:15:28.017 user 0m2.922s 00:15:28.017 sys 0m2.081s 00:15:28.017 15:15:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.017 ************************************ 00:15:28.017 END TEST test_save_ublk_config 00:15:28.017 ************************************ 00:15:28.017 15:15:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.017 15:15:26 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84025 00:15:28.017 15:15:26 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:28.017 15:15:26 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.017 15:15:26 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84025 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@831 -- # '[' -z 84025 ']' 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.017 15:15:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.018 [2024-10-01 15:15:26.490183] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
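The harness restarts the target with a two-core mask (-m 0x3) and ublk debug logging (-L ublk), then blocks until the RPC socket answers. Done by hand it would look roughly like the following (a sketch; paths as in this checkout, and rpc_get_methods used only as a cheap liveness probe):
  build/bin/spdk_tgt -m 0x3 -L ublk &
  # wait until /var/tmp/spdk.sock accepts RPCs before issuing real commands
  scripts/rpc.py rpc_get_methods > /dev/null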
00:15:28.018 [2024-10-01 15:15:26.490325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84025 ] 00:15:28.276 [2024-10-01 15:15:26.654016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.276 [2024-10-01 15:15:26.699418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.276 [2024-10-01 15:15:26.699526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.843 15:15:27 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.843 15:15:27 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:28.843 15:15:27 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:28.843 15:15:27 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:28.843 15:15:27 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.843 15:15:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.843 ************************************ 00:15:28.843 START TEST test_create_ublk 00:15:28.843 ************************************ 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:28.843 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.843 [2024-10-01 15:15:27.346196] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:28.843 [2024-10-01 15:15:27.347610] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.843 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:28.843 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.843 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 [2024-10-01 15:15:27.427336] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:29.101 [2024-10-01 15:15:27.427790] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:29.101 [2024-10-01 15:15:27.427824] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:29.101 [2024-10-01 15:15:27.427855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.101 [2024-10-01 15:15:27.435555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.101 [2024-10-01 15:15:27.435592] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.101 
[2024-10-01 15:15:27.443209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.101 [2024-10-01 15:15:27.443781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:29.101 [2024-10-01 15:15:27.459233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.101 15:15:27 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:29.101 { 00:15:29.101 "ublk_device": "/dev/ublkb0", 00:15:29.101 "id": 0, 00:15:29.101 "queue_depth": 512, 00:15:29.101 "num_queues": 4, 00:15:29.101 "bdev_name": "Malloc0" 00:15:29.101 } 00:15:29.101 ]' 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:29.101 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:29.374 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:29.374 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:29.374 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:29.374 15:15:27 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
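Each assertion above pulls one field out of the ublk_get_disks JSON with jq before the fio write/verify pass that follows. The same checks issued directly (a sketch; expected values taken from this run):
  scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # /dev/ublkb0
  scripts/rpc.py ublk_get_disks | jq -r '.[0].queue_depth'   # 512
  scripts/rpc.py ublk_get_disks | jq -r '.[0].bdev_name'     # Malloc0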
00:15:29.374 15:15:27 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:29.374 fio: verification read phase will never start because write phase uses all of runtime 00:15:29.374 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:29.374 fio-3.35 00:15:29.374 Starting 1 process 00:15:41.575 00:15:41.575 fio_test: (groupid=0, jobs=1): err= 0: pid=84070: Tue Oct 1 15:15:37 2024 00:15:41.576 write: IOPS=15.8k, BW=61.9MiB/s (64.9MB/s)(619MiB/10001msec); 0 zone resets 00:15:41.576 clat (usec): min=37, max=4022, avg=62.28, stdev=102.19 00:15:41.576 lat (usec): min=37, max=4023, avg=62.75, stdev=102.19 00:15:41.576 clat percentiles (usec): 00:15:41.576 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:15:41.576 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:15:41.576 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 65], 95.00th=[ 73], 00:15:41.576 | 99.00th=[ 88], 99.50th=[ 95], 99.90th=[ 2114], 99.95th=[ 2835], 00:15:41.576 | 99.99th=[ 3621] 00:15:41.576 bw ( KiB/s): min=57624, max=66264, per=100.00%, avg=63403.21, stdev=2431.73, samples=19 00:15:41.576 iops : min=14406, max=16566, avg=15850.79, stdev=607.93, samples=19 00:15:41.576 lat (usec) : 50=2.33%, 100=97.27%, 250=0.19%, 500=0.01%, 750=0.01% 00:15:41.576 lat (usec) : 1000=0.02% 00:15:41.576 lat (msec) : 2=0.06%, 4=0.11%, 10=0.01% 00:15:41.576 cpu : usr=2.56%, sys=8.11%, ctx=158497, majf=0, minf=794 00:15:41.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.576 issued rwts: total=0,158480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.576 00:15:41.576 Run status group 0 (all jobs): 00:15:41.576 WRITE: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=619MiB (649MB), run=10001-10001msec 00:15:41.576 00:15:41.576 Disk stats (read/write): 00:15:41.576 ublkb0: ios=0/156788, merge=0/0, ticks=0/8869, in_queue=8869, util=98.91% 00:15:41.576 15:15:37 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:41.576 15:15:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 [2024-10-01 15:15:37.995581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.576 [2024-10-01 15:15:38.038641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.576 [2024-10-01 15:15:38.039566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.576 [2024-10-01 15:15:38.047223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.576 [2024-10-01 15:15:38.051459] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:41.576 [2024-10-01 15:15:38.051478] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 [2024-10-01 15:15:38.070333] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:41.576 request: 00:15:41.576 { 00:15:41.576 "ublk_id": 0, 00:15:41.576 "method": "ublk_stop_disk", 00:15:41.576 "req_id": 1 00:15:41.576 } 00:15:41.576 Got JSON-RPC error response 00:15:41.576 response: 00:15:41.576 { 00:15:41.576 "code": -19, 00:15:41.576 "message": "No such device" 00:15:41.576 } 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.576 15:15:38 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 [2024-10-01 15:15:38.093296] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.576 [2024-10-01 15:15:38.095417] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.576 [2024-10-01 15:15:38.095467] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:41.576 15:15:38 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:41.576 ************************************ 00:15:41.576 END TEST test_create_ublk 00:15:41.576 ************************************ 00:15:41.576 15:15:38 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:41.576 00:15:41.576 real 0m10.973s 00:15:41.576 user 0m0.662s 00:15:41.576 sys 0m0.918s 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:41.576 15:15:38 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:41.576 15:15:38 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.576 15:15:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 ************************************ 00:15:41.576 START TEST test_create_multi_ublk 00:15:41.576 ************************************ 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 [2024-10-01 15:15:38.404193] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:41.576 [2024-10-01 15:15:38.405587] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 [2024-10-01 15:15:38.523368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
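test_create_multi_ublk repeats the single-disk create pattern once per device ID, as the interleaved logs below show. Condensed into a loop, the per-iteration RPCs amount to (a sketch of the commands the test issues via rpc_cmd):
  for i in 0 1 2 3; do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4 KiB blocks
    scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # 4 queues, queue depth 512
  done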
00:15:41.576 [2024-10-01 15:15:38.523818] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:41.576 [2024-10-01 15:15:38.523839] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:41.576 [2024-10-01 15:15:38.523848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.576 [2024-10-01 15:15:38.535212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.576 [2024-10-01 15:15:38.535238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.576 [2024-10-01 15:15:38.547205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.576 [2024-10-01 15:15:38.547820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:41.576 [2024-10-01 15:15:38.559663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:41.576 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 [2024-10-01 15:15:38.675354] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:41.577 [2024-10-01 15:15:38.675813] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:41.577 [2024-10-01 15:15:38.675830] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:41.577 [2024-10-01 15:15:38.675841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.577 [2024-10-01 15:15:38.687256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.577 [2024-10-01 15:15:38.687287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.577 [2024-10-01 15:15:38.699212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.577 [2024-10-01 15:15:38.699872] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:41.577 [2024-10-01 15:15:38.706261] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:38 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 [2024-10-01 15:15:38.817361] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:41.577 [2024-10-01 15:15:38.817818] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:41.577 [2024-10-01 15:15:38.817839] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:41.577 [2024-10-01 15:15:38.817848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.577 [2024-10-01 15:15:38.828214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.577 [2024-10-01 15:15:38.828243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.577 [2024-10-01 15:15:38.840231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.577 [2024-10-01 15:15:38.840888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:41.577 [2024-10-01 15:15:38.843822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 [2024-10-01 15:15:38.957342] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:41.577 [2024-10-01 15:15:38.957807] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:41.577 [2024-10-01 15:15:38.957824] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:41.577 [2024-10-01 15:15:38.957836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.577 [2024-10-01 
15:15:38.969211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.577 [2024-10-01 15:15:38.969246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.577 [2024-10-01 15:15:38.981200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.577 [2024-10-01 15:15:38.981816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:41.577 [2024-10-01 15:15:39.021219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:41.577 { 00:15:41.577 "ublk_device": "/dev/ublkb0", 00:15:41.577 "id": 0, 00:15:41.577 "queue_depth": 512, 00:15:41.577 "num_queues": 4, 00:15:41.577 "bdev_name": "Malloc0" 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "ublk_device": "/dev/ublkb1", 00:15:41.577 "id": 1, 00:15:41.577 "queue_depth": 512, 00:15:41.577 "num_queues": 4, 00:15:41.577 "bdev_name": "Malloc1" 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "ublk_device": "/dev/ublkb2", 00:15:41.577 "id": 2, 00:15:41.577 "queue_depth": 512, 00:15:41.577 "num_queues": 4, 00:15:41.577 "bdev_name": "Malloc2" 00:15:41.577 }, 00:15:41.577 { 00:15:41.577 "ublk_device": "/dev/ublkb3", 00:15:41.577 "id": 3, 00:15:41.577 "queue_depth": 512, 00:15:41.577 "num_queues": 4, 00:15:41.577 "bdev_name": "Malloc3" 00:15:41.577 } 00:15:41.577 ]' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
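The four-entry array above maps each /dev/ublkbN back to its malloc bdev; a one-liner printing the same mapping (a sketch, reusing the jq the test already depends on):
  scripts/rpc.py ublk_get_disks | jq -r '.[] | "\(.ublk_device) <- \(.bdev_name)"'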
00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.577 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.577 [2024-10-01 15:15:39.939324] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.577 [2024-10-01 15:15:39.984202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.577 [2024-10-01 15:15:39.985191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.578 [2024-10-01 15:15:39.995208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.578 [2024-10-01 15:15:39.995547] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:41.578 [2024-10-01 15:15:39.995563] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:41.578 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.578 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.578 15:15:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:41.578 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.578 15:15:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 [2024-10-01 15:15:40.003318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.578 [2024-10-01 15:15:40.057278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.578 [2024-10-01 15:15:40.058188] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.578 [2024-10-01 15:15:40.065213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.578 [2024-10-01 15:15:40.065540] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:41.578 [2024-10-01 15:15:40.065561] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:41.578 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.578 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.578 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:41.578 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.578 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.578 [2024-10-01 15:15:40.081361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.837 [2024-10-01 15:15:40.127679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.837 [2024-10-01 15:15:40.128659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.837 [2024-10-01 15:15:40.136229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.837 [2024-10-01 15:15:40.136559] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:41.837 [2024-10-01 15:15:40.136577] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
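Teardown mirrors creation in reverse: each disk is stopped, the target is destroyed with a generous 120 s RPC timeout (as the test does just below), and the backing bdevs are deleted. Sketched as standalone commands:
  for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
  scripts/rpc.py -t 120 ublk_destroy_target
  for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete Malloc$i; done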
00:15:41.837 [2024-10-01 15:15:40.152322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.837 [2024-10-01 15:15:40.191646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:41.837 [2024-10-01 15:15:40.193576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.837 [2024-10-01 15:15:40.200217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.837 [2024-10-01 15:15:40.200563] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:41.837 [2024-10-01 15:15:40.200581] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.837 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:42.097 [2024-10-01 15:15:40.472364] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.097 [2024-10-01 15:15:40.473785] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:42.097 [2024-10-01 15:15:40.473832] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.097 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.356 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:42.615 15:15:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:42.615 ************************************ 00:15:42.615 END TEST test_create_multi_ublk 00:15:42.615 ************************************ 00:15:42.615 15:15:41 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:42.615 00:15:42.615 real 0m2.652s 00:15:42.615 user 0m1.154s 00:15:42.615 sys 0m0.216s 00:15:42.615 15:15:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.615 15:15:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.615 15:15:41 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:42.615 15:15:41 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:42.615 15:15:41 ublk -- ublk/ublk.sh@130 -- # killprocess 84025 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@950 -- # '[' -z 84025 ']' 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@954 -- # kill -0 84025 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@955 -- # uname 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84025 00:15:42.615 killing process with pid 84025 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84025' 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@969 -- # kill 84025 00:15:42.615 15:15:41 ublk -- common/autotest_common.sh@974 -- # wait 84025 00:15:42.945 [2024-10-01 15:15:41.311902] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.945 [2024-10-01 15:15:41.311980] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:43.204 00:15:43.204 real 0m19.701s 00:15:43.204 user 0m30.182s 00:15:43.204 sys 0m8.379s 00:15:43.204 15:15:41 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.204 15:15:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:43.204 ************************************ 00:15:43.204 END TEST ublk 00:15:43.204 ************************************ 00:15:43.204 15:15:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:43.204 15:15:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:15:43.204 15:15:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.204 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:15:43.204 ************************************ 00:15:43.204 START TEST ublk_recovery 00:15:43.204 ************************************ 00:15:43.204 15:15:41 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:43.463 * Looking for test storage... 00:15:43.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.463 15:15:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:43.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.463 --rc genhtml_branch_coverage=1 00:15:43.463 --rc genhtml_function_coverage=1 00:15:43.463 --rc genhtml_legend=1 00:15:43.463 --rc geninfo_all_blocks=1 00:15:43.463 --rc geninfo_unexecuted_blocks=1 00:15:43.463 00:15:43.463 ' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:43.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.463 --rc genhtml_branch_coverage=1 00:15:43.463 --rc genhtml_function_coverage=1 00:15:43.463 --rc genhtml_legend=1 00:15:43.463 --rc geninfo_all_blocks=1 00:15:43.463 --rc geninfo_unexecuted_blocks=1 00:15:43.463 00:15:43.463 ' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:43.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.463 --rc genhtml_branch_coverage=1 00:15:43.463 --rc genhtml_function_coverage=1 00:15:43.463 --rc genhtml_legend=1 00:15:43.463 --rc geninfo_all_blocks=1 00:15:43.463 --rc geninfo_unexecuted_blocks=1 00:15:43.463 00:15:43.463 ' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:43.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.463 --rc genhtml_branch_coverage=1 00:15:43.463 --rc genhtml_function_coverage=1 00:15:43.463 --rc genhtml_legend=1 00:15:43.463 --rc geninfo_all_blocks=1 00:15:43.463 --rc geninfo_unexecuted_blocks=1 00:15:43.463 00:15:43.463 ' 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:43.463 15:15:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:43.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84397 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84397 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84397 ']' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.463 15:15:41 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:43.463 15:15:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.463 [2024-10-01 15:15:41.956028] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:15:43.463 [2024-10-01 15:15:41.956161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84397 ] 00:15:43.723 [2024-10-01 15:15:42.125400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.723 [2024-10-01 15:15:42.172605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.723 [2024-10-01 15:15:42.172705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:44.291 15:15:42 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 [2024-10-01 15:15:42.763195] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:44.291 [2024-10-01 15:15:42.764653] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.291 15:15:42 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 malloc0 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.291 15:15:42 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.291 15:15:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 [2024-10-01 15:15:42.803615] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:44.291 [2024-10-01 15:15:42.803746] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:44.291 [2024-10-01 15:15:42.803757] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:44.291 [2024-10-01 15:15:42.803768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:44.291 [2024-10-01 15:15:42.812292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:44.291 [2024-10-01 15:15:42.812328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:44.291 [2024-10-01 15:15:42.818228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:44.291 [2024-10-01 15:15:42.818376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:44.291 [2024-10-01 15:15:42.833232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:44.550 1 00:15:44.550 15:15:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.550 15:15:42 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:45.485 15:15:43 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84427 00:15:45.485 15:15:43 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:45.485 15:15:43 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:45.485 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.485 fio-3.35 00:15:45.485 Starting 1 process 00:15:50.794 15:15:48 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84397 00:15:50.794 15:15:48 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:56.069 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84397 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:56.069 15:15:53 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84537 00:15:56.069 15:15:53 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.069 15:15:53 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.069 15:15:53 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84537 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84537 ']' 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.069 15:15:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.069 [2024-10-01 15:15:53.974216] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
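With the old target killed -9 out from under the running fio job, the fresh target reattaches to the still-live kernel ublk device instead of recreating it. The recovery sequence that follows boils down to (a sketch; bdev name and ublk ID as in this run):
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1   # reattach bdev malloc0 to the existing /dev/ublkb1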
00:15:56.069 [2024-10-01 15:15:53.974396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84537 ] 00:15:56.069 [2024-10-01 15:15:54.150316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.069 [2024-10-01 15:15:54.204166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.069 [2024-10-01 15:15:54.204305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:56.326 15:15:54 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.326 [2024-10-01 15:15:54.830193] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:56.326 [2024-10-01 15:15:54.831701] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.326 15:15:54 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.326 malloc0 00:15:56.326 15:15:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.583 15:15:54 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:56.583 15:15:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.583 15:15:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.583 [2024-10-01 15:15:54.877384] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:56.583 [2024-10-01 15:15:54.877440] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:56.583 [2024-10-01 15:15:54.877459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:56.583 [2024-10-01 15:15:54.885266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:56.583 [2024-10-01 15:15:54.885298] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:56.583 [2024-10-01 15:15:54.885318] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:56.583 [2024-10-01 15:15:54.885440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:56.583 1 00:15:56.583 15:15:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.583 15:15:54 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84427 00:15:56.583 [2024-10-01 15:15:54.893241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:56.583 [2024-10-01 15:15:54.900062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:56.583 [2024-10-01 15:15:54.907504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:56.583 [2024-10-01 
15:15:54.907535] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:52.863 00:16:52.864 fio_test: (groupid=0, jobs=1): err= 0: pid=84434: Tue Oct 1 15:16:44 2024 00:16:52.864 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(4913MiB/60002msec) 00:16:52.864 slat (nsec): min=1936, max=567948, avg=7629.42, stdev=2836.11 00:16:52.864 clat (usec): min=1268, max=6068.0k, avg=3003.65, stdev=43261.34 00:16:52.864 lat (usec): min=1281, max=6068.0k, avg=3011.28, stdev=43261.34 00:16:52.864 clat percentiles (usec): 00:16:52.864 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:16:52.864 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:16:52.864 | 70.00th=[ 2606], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3982], 00:16:52.864 | 99.00th=[ 5473], 99.50th=[ 5997], 99.90th=[ 7308], 99.95th=[ 9241], 00:16:52.864 | 99.99th=[13173] 00:16:52.864 bw ( KiB/s): min=15872, max=105776, per=100.00%, avg=92509.77, stdev=14464.58, samples=108 00:16:52.864 iops : min= 3968, max=26444, avg=23127.41, stdev=3616.15, samples=108 00:16:52.864 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(4908MiB/60002msec); 0 zone resets 00:16:52.864 slat (nsec): min=1961, max=1099.0k, avg=7697.60, stdev=3091.84 00:16:52.864 clat (usec): min=1359, max=6068.2k, avg=3089.19, stdev=43280.89 00:16:52.864 lat (usec): min=1372, max=6068.2k, avg=3096.89, stdev=43280.88 00:16:52.864 clat percentiles (usec): 00:16:52.864 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:16:52.864 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:16:52.864 | 70.00th=[ 2704], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3949], 00:16:52.864 | 99.00th=[ 5473], 99.50th=[ 6063], 99.90th=[ 7504], 99.95th=[ 9372], 00:16:52.864 | 99.99th=[13042] 00:16:52.864 bw ( KiB/s): min=16502, max=104296, per=100.00%, avg=92426.93, stdev=14265.68, samples=108 00:16:52.864 iops : min= 4125, max=26074, avg=23106.69, stdev=3566.45, samples=108 00:16:52.864 lat (msec) : 2=1.09%, 4=94.06%, 10=4.81%, 20=0.03%, >=2000=0.01% 00:16:52.864 cpu : usr=12.05%, sys=32.28%, ctx=107088, majf=0, minf=13 00:16:52.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:52.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:52.864 issued rwts: total=1257635,1256514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:52.864 00:16:52.864 Run status group 0 (all jobs): 00:16:52.864 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=4913MiB (5151MB), run=60002-60002msec 00:16:52.864 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=4908MiB (5147MB), run=60002-60002msec 00:16:52.864 00:16:52.864 Disk stats (read/write): 00:16:52.864 ublkb1: ios=1255498/1254458, merge=0/0, ticks=3657111/3627767, in_queue=7284879, util=99.95% 00:16:52.864 15:16:44 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:52.864 [2024-10-01 15:16:44.124626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.864 [2024-10-01 15:16:44.160290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.864 [2024-10-01 
15:16:44.160712] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.864 [2024-10-01 15:16:44.167213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.864 [2024-10-01 15:16:44.167450] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:52.864 [2024-10-01 15:16:44.167463] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.864 15:16:44 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:52.864 [2024-10-01 15:16:44.173407] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:52.864 [2024-10-01 15:16:44.177026] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:52.864 [2024-10-01 15:16:44.177091] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.864 15:16:44 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:52.864 15:16:44 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:52.864 15:16:44 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84537 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84537 ']' 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84537 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84537 00:16:52.864 killing process with pid 84537 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84537' 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84537 00:16:52.864 15:16:44 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84537 00:16:52.864 [2024-10-01 15:16:44.532720] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:52.864 [2024-10-01 15:16:44.532819] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:52.864 00:16:52.864 real 1m3.414s 00:16:52.864 user 1m44.808s 00:16:52.864 sys 0m37.997s 00:16:52.864 ************************************ 00:16:52.864 END TEST ublk_recovery 00:16:52.864 ************************************ 00:16:52.864 15:16:45 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.864 15:16:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:52.864 15:16:45 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:52.864 15:16:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.864 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:52.864 15:16:45 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:52.864 15:16:45 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:52.864 15:16:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:52.864 15:16:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.864 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:16:52.864 ************************************ 00:16:52.864 START TEST ftl 00:16:52.864 ************************************ 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:52.864 * Looking for test storage... 00:16:52.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.864 15:16:45 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.864 15:16:45 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.864 15:16:45 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.864 15:16:45 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.864 15:16:45 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.864 15:16:45 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:52.864 15:16:45 ftl -- scripts/common.sh@345 -- # : 1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.864 15:16:45 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.864 15:16:45 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@353 -- # local d=1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.864 15:16:45 ftl -- scripts/common.sh@355 -- # echo 1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.864 15:16:45 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@353 -- # local d=2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.864 15:16:45 ftl -- scripts/common.sh@355 -- # echo 2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.864 15:16:45 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.864 15:16:45 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.864 15:16:45 ftl -- scripts/common.sh@368 -- # return 0 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:52.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.864 --rc genhtml_branch_coverage=1 00:16:52.864 --rc genhtml_function_coverage=1 00:16:52.864 --rc genhtml_legend=1 00:16:52.864 --rc geninfo_all_blocks=1 00:16:52.864 --rc geninfo_unexecuted_blocks=1 00:16:52.864 00:16:52.864 ' 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:52.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.864 --rc genhtml_branch_coverage=1 00:16:52.864 --rc genhtml_function_coverage=1 00:16:52.864 --rc genhtml_legend=1 00:16:52.864 --rc geninfo_all_blocks=1 00:16:52.864 --rc geninfo_unexecuted_blocks=1 00:16:52.864 00:16:52.864 ' 00:16:52.864 15:16:45 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.865 --rc genhtml_branch_coverage=1 00:16:52.865 --rc genhtml_function_coverage=1 00:16:52.865 --rc genhtml_legend=1 00:16:52.865 --rc geninfo_all_blocks=1 00:16:52.865 --rc geninfo_unexecuted_blocks=1 00:16:52.865 00:16:52.865 ' 00:16:52.865 15:16:45 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.865 --rc genhtml_branch_coverage=1 00:16:52.865 --rc genhtml_function_coverage=1 00:16:52.865 --rc genhtml_legend=1 00:16:52.865 --rc geninfo_all_blocks=1 00:16:52.865 --rc geninfo_unexecuted_blocks=1 00:16:52.865 00:16:52.865 ' 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:52.865 15:16:45 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:52.865 15:16:45 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.865 15:16:45 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.865 15:16:45 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
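The lcov gate that keeps reappearing in this log (lt 1.15 2 via scripts/common.sh cmp_versions) is a component-wise compare of dotted versions split on '.', '-', and ':'. A minimal standalone sketch of the same idea, not the literal common.sh code:

    ver_lt() {                            # 0 (true) when $1 < $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v x y
        for (( v = 0; v < n; v++ )); do
            x=${a[v]:-0}; y=${b[v]:-0}
            [[ $x =~ ^[0-9]+$ ]] || x=0   # non-numeric parts (rc tags etc.) compare as 0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                          # equal
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x option set"

Missing components default to 0, which is why 1.15 sorts below 2: the first components already decide it.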
00:16:52.865 15:16:45 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:52.865 15:16:45 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.865 15:16:45 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.865 15:16:45 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.865 15:16:45 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.865 15:16:45 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.865 15:16:45 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:52.865 15:16:45 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:52.865 15:16:45 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.865 15:16:45 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.865 15:16:45 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:52.865 15:16:45 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.865 15:16:45 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.865 15:16:45 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.865 15:16:45 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.865 15:16:45 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:52.865 15:16:45 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:52.865 15:16:45 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.865 15:16:45 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:52.865 15:16:45 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.865 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.865 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.865 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.865 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
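The ftl.sh@34 step above clears PCI_ALLOWED, PCI_BLOCKED, and DRIVER_OVERRIDE before calling setup.sh, so every eligible NVMe controller gets bound to a userspace driver (uio_pci_generic here), while the virtio vda disk is skipped because its partitions are mounted. A sketch of reproducing just that binding step, assuming the repo path used throughout this log:

    cd /home/vagrant/spdk_repo/spdk
    PCI_ALLOWED= PCI_BLOCKED= DRIVER_OVERRIDE= ./scripts/setup.sh   # bind every free NVMe device
    PCI_ALLOWED="0000:00:10.0 0000:00:11.0" ./scripts/setup.sh     # or only the cache + base disks
    ./scripts/setup.sh reset                                       # hand devices back to the kernel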
00:16:52.865 15:16:46 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85331 00:16:52.865 15:16:46 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85331 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@831 -- # '[' -z 85331 ']' 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.865 15:16:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:52.865 15:16:46 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:52.865 [2024-10-01 15:16:46.409124] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:16:52.865 [2024-10-01 15:16:46.409297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85331 ] 00:16:52.865 [2024-10-01 15:16:46.583259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.865 [2024-10-01 15:16:46.637233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.865 15:16:47 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.865 15:16:47 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:52.865 15:16:47 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:52.865 15:16:47 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:52.865 15:16:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:52.865 15:16:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@50 -- # break 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@63 -- # break 00:16:52.865 15:16:48 ftl -- ftl/ftl.sh@66 -- # killprocess 85331 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@950 -- # '[' -z 85331 ']' 00:16:52.865 15:16:48 
ftl -- common/autotest_common.sh@954 -- # kill -0 85331 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@955 -- # uname 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85331 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.865 killing process with pid 85331 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85331' 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@969 -- # kill 85331 00:16:52.865 15:16:48 ftl -- common/autotest_common.sh@974 -- # wait 85331 00:16:52.865 15:16:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:52.865 15:16:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:52.865 15:16:49 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.865 15:16:49 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.865 15:16:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:52.865 ************************************ 00:16:52.865 START TEST ftl_fio_basic 00:16:52.865 ************************************ 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:52.865 * Looking for test storage... 00:16:52.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.865 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.866 --rc genhtml_branch_coverage=1 00:16:52.866 --rc genhtml_function_coverage=1 00:16:52.866 --rc genhtml_legend=1 00:16:52.866 --rc geninfo_all_blocks=1 00:16:52.866 --rc geninfo_unexecuted_blocks=1 00:16:52.866 00:16:52.866 ' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.866 --rc genhtml_branch_coverage=1 00:16:52.866 --rc genhtml_function_coverage=1 00:16:52.866 --rc genhtml_legend=1 00:16:52.866 --rc geninfo_all_blocks=1 00:16:52.866 --rc geninfo_unexecuted_blocks=1 00:16:52.866 00:16:52.866 ' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.866 --rc genhtml_branch_coverage=1 00:16:52.866 --rc genhtml_function_coverage=1 00:16:52.866 --rc genhtml_legend=1 00:16:52.866 --rc geninfo_all_blocks=1 00:16:52.866 --rc geninfo_unexecuted_blocks=1 00:16:52.866 00:16:52.866 ' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.866 --rc genhtml_branch_coverage=1 00:16:52.866 --rc genhtml_function_coverage=1 00:16:52.866 --rc genhtml_legend=1 00:16:52.866 --rc geninfo_all_blocks=1 00:16:52.866 --rc geninfo_unexecuted_blocks=1 00:16:52.866 00:16:52.866 ' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
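Stepping back to the device selection in ftl.sh above (@47 and @60): both disk lists come from jq filters over bdev_get_bdevs output. In the traced command the cache address "0000:00:10.0" appears literally because the shell expanded it before jq ran; the sketch below reintroduces the variable, and takes the first match the way the for/break loop above does:

    cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    cache=$(echo "$cache_disks" | head -1)       # -> 0000:00:10.0 on this rig
    base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r ".[] | select(.driver_specific.nvme[0].pci_address!=\"$cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address")

The first filter keys on md_size==64 to find a metadata-capable controller for the NV cache; the second excludes that controller so the FTL base device lands on the other disk (0000:00:11.0 here).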
00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85452 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85452 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85452 ']' 00:16:52.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.866 15:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:52.866 [2024-10-01 15:16:49.743467] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
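Before the new target's output continues, the fio.sh prologue just traced is worth collecting in one place: the suite table maps a suite name to a space-separated test list, and the script is parameterized by base device, cache device, and suite. A sketch, assuming the positional-argument mapping implied by the invocation fio.sh 0000:00:11.0 0000:00:10.0 basic:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    device=$1; cache_device=$2; tests=${suite[$3]}; timeout=240
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$testdir/config/ftl.json
    # invoked as: run_test ftl_fio_basic $testdir/fio.sh 0000:00:11.0 0000:00:10.0 basic

Each name in $tests corresponds to a fio job file under test/ftl/config, run in turn against the ftl0 bdev named by FTL_BDEV_NAME.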
00:16:52.866 [2024-10-01 15:16:49.744057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85452 ] 00:16:52.866 [2024-10-01 15:16:49.922448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.866 [2024-10-01 15:16:50.007058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.866 [2024-10-01 15:16:50.007207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.866 [2024-10-01 15:16:50.007355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:52.866 15:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:52.866 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:52.866 { 00:16:52.866 "name": "nvme0n1", 00:16:52.866 "aliases": [ 00:16:52.866 "23e11cb5-939c-42e6-b117-124f1cc0f877" 00:16:52.866 ], 00:16:52.866 "product_name": "NVMe disk", 00:16:52.866 "block_size": 4096, 00:16:52.866 "num_blocks": 1310720, 00:16:52.866 "uuid": "23e11cb5-939c-42e6-b117-124f1cc0f877", 00:16:52.866 "numa_id": -1, 00:16:52.866 "assigned_rate_limits": { 00:16:52.866 "rw_ios_per_sec": 0, 00:16:52.866 "rw_mbytes_per_sec": 0, 00:16:52.866 "r_mbytes_per_sec": 0, 00:16:52.866 "w_mbytes_per_sec": 0 00:16:52.866 }, 00:16:52.866 "claimed": false, 00:16:52.866 "zoned": false, 00:16:52.866 "supported_io_types": { 00:16:52.866 "read": true, 00:16:52.866 "write": true, 00:16:52.866 "unmap": true, 00:16:52.866 "flush": true, 00:16:52.866 "reset": true, 00:16:52.866 "nvme_admin": true, 00:16:52.866 "nvme_io": true, 00:16:52.866 "nvme_io_md": false, 00:16:52.866 "write_zeroes": true, 00:16:52.866 "zcopy": false, 00:16:52.866 "get_zone_info": false, 00:16:52.866 "zone_management": false, 00:16:52.866 "zone_append": false, 00:16:52.866 "compare": true, 00:16:52.866 "compare_and_write": false, 00:16:52.866 "abort": true, 00:16:52.866 
"seek_hole": false, 00:16:52.866 "seek_data": false, 00:16:52.866 "copy": true, 00:16:52.866 "nvme_iov_md": false 00:16:52.866 }, 00:16:52.866 "driver_specific": { 00:16:52.866 "nvme": [ 00:16:52.866 { 00:16:52.867 "pci_address": "0000:00:11.0", 00:16:52.867 "trid": { 00:16:52.867 "trtype": "PCIe", 00:16:52.867 "traddr": "0000:00:11.0" 00:16:52.867 }, 00:16:52.867 "ctrlr_data": { 00:16:52.867 "cntlid": 0, 00:16:52.867 "vendor_id": "0x1b36", 00:16:52.867 "model_number": "QEMU NVMe Ctrl", 00:16:52.867 "serial_number": "12341", 00:16:52.867 "firmware_revision": "8.0.0", 00:16:52.867 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:52.867 "oacs": { 00:16:52.867 "security": 0, 00:16:52.867 "format": 1, 00:16:52.867 "firmware": 0, 00:16:52.867 "ns_manage": 1 00:16:52.867 }, 00:16:52.867 "multi_ctrlr": false, 00:16:52.867 "ana_reporting": false 00:16:52.867 }, 00:16:52.867 "vs": { 00:16:52.867 "nvme_version": "1.4" 00:16:52.867 }, 00:16:52.867 "ns_data": { 00:16:52.867 "id": 1, 00:16:52.867 "can_share": false 00:16:52.867 } 00:16:52.867 } 00:16:52.867 ], 00:16:52.867 "mp_policy": "active_passive" 00:16:52.867 } 00:16:52.867 } 00:16:52.867 ]' 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:52.867 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:53.184 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:53.184 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:53.444 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7 00:16:53.444 15:16:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=58dd7623-371f-4556-b524-c0392a3eef41 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 58dd7623-371f-4556-b524-c0392a3eef41 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=58dd7623-371f-4556-b524-c0392a3eef41 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 58dd7623-371f-4556-b524-c0392a3eef41 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=58dd7623-371f-4556-b524-c0392a3eef41 
00:16:53.701 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:53.701 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58dd7623-371f-4556-b524-c0392a3eef41 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.959 { 00:16:53.959 "name": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:53.959 "aliases": [ 00:16:53.959 "lvs/nvme0n1p0" 00:16:53.959 ], 00:16:53.959 "product_name": "Logical Volume", 00:16:53.959 "block_size": 4096, 00:16:53.959 "num_blocks": 26476544, 00:16:53.959 "uuid": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:53.959 "assigned_rate_limits": { 00:16:53.959 "rw_ios_per_sec": 0, 00:16:53.959 "rw_mbytes_per_sec": 0, 00:16:53.959 "r_mbytes_per_sec": 0, 00:16:53.959 "w_mbytes_per_sec": 0 00:16:53.959 }, 00:16:53.959 "claimed": false, 00:16:53.959 "zoned": false, 00:16:53.959 "supported_io_types": { 00:16:53.959 "read": true, 00:16:53.959 "write": true, 00:16:53.959 "unmap": true, 00:16:53.959 "flush": false, 00:16:53.959 "reset": true, 00:16:53.959 "nvme_admin": false, 00:16:53.959 "nvme_io": false, 00:16:53.959 "nvme_io_md": false, 00:16:53.959 "write_zeroes": true, 00:16:53.959 "zcopy": false, 00:16:53.959 "get_zone_info": false, 00:16:53.959 "zone_management": false, 00:16:53.959 "zone_append": false, 00:16:53.959 "compare": false, 00:16:53.959 "compare_and_write": false, 00:16:53.959 "abort": false, 00:16:53.959 "seek_hole": true, 00:16:53.959 "seek_data": true, 00:16:53.959 "copy": false, 00:16:53.959 "nvme_iov_md": false 00:16:53.959 }, 00:16:53.959 "driver_specific": { 00:16:53.959 "lvol": { 00:16:53.959 "lvol_store_uuid": "a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7", 00:16:53.959 "base_bdev": "nvme0n1", 00:16:53.959 "thin_provision": true, 00:16:53.959 "num_allocated_clusters": 0, 00:16:53.959 "snapshot": false, 00:16:53.959 "clone": false, 00:16:53.959 "esnap_clone": false 00:16:53.959 } 00:16:53.959 } 00:16:53.959 } 00:16:53.959 ]' 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:53.959 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 58dd7623-371f-4556-b524-c0392a3eef41 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=58dd7623-371f-4556-b524-c0392a3eef41 00:16:54.219 15:16:52 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:54.219 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58dd7623-371f-4556-b524-c0392a3eef41 00:16:54.478 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.478 { 00:16:54.478 "name": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:54.478 "aliases": [ 00:16:54.478 "lvs/nvme0n1p0" 00:16:54.478 ], 00:16:54.478 "product_name": "Logical Volume", 00:16:54.478 "block_size": 4096, 00:16:54.478 "num_blocks": 26476544, 00:16:54.478 "uuid": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:54.478 "assigned_rate_limits": { 00:16:54.478 "rw_ios_per_sec": 0, 00:16:54.478 "rw_mbytes_per_sec": 0, 00:16:54.478 "r_mbytes_per_sec": 0, 00:16:54.478 "w_mbytes_per_sec": 0 00:16:54.478 }, 00:16:54.478 "claimed": false, 00:16:54.478 "zoned": false, 00:16:54.478 "supported_io_types": { 00:16:54.478 "read": true, 00:16:54.478 "write": true, 00:16:54.478 "unmap": true, 00:16:54.478 "flush": false, 00:16:54.478 "reset": true, 00:16:54.478 "nvme_admin": false, 00:16:54.478 "nvme_io": false, 00:16:54.478 "nvme_io_md": false, 00:16:54.478 "write_zeroes": true, 00:16:54.478 "zcopy": false, 00:16:54.478 "get_zone_info": false, 00:16:54.478 "zone_management": false, 00:16:54.478 "zone_append": false, 00:16:54.478 "compare": false, 00:16:54.478 "compare_and_write": false, 00:16:54.478 "abort": false, 00:16:54.478 "seek_hole": true, 00:16:54.478 "seek_data": true, 00:16:54.478 "copy": false, 00:16:54.478 "nvme_iov_md": false 00:16:54.478 }, 00:16:54.478 "driver_specific": { 00:16:54.478 "lvol": { 00:16:54.478 "lvol_store_uuid": "a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7", 00:16:54.478 "base_bdev": "nvme0n1", 00:16:54.478 "thin_provision": true, 00:16:54.478 "num_allocated_clusters": 0, 00:16:54.478 "snapshot": false, 00:16:54.478 "clone": false, 00:16:54.478 "esnap_clone": false 00:16:54.478 } 00:16:54.478 } 00:16:54.478 } 00:16:54.478 ]' 00:16:54.478 15:16:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:54.736 15:16:53 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:54.995 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 58dd7623-371f-4556-b524-c0392a3eef41 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=58dd7623-371f-4556-b524-c0392a3eef41 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:54.995 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58dd7623-371f-4556-b524-c0392a3eef41 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:55.253 { 00:16:55.253 "name": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:55.253 "aliases": [ 00:16:55.253 "lvs/nvme0n1p0" 00:16:55.253 ], 00:16:55.253 "product_name": "Logical Volume", 00:16:55.253 "block_size": 4096, 00:16:55.253 "num_blocks": 26476544, 00:16:55.253 "uuid": "58dd7623-371f-4556-b524-c0392a3eef41", 00:16:55.253 "assigned_rate_limits": { 00:16:55.253 "rw_ios_per_sec": 0, 00:16:55.253 "rw_mbytes_per_sec": 0, 00:16:55.253 "r_mbytes_per_sec": 0, 00:16:55.253 "w_mbytes_per_sec": 0 00:16:55.253 }, 00:16:55.253 "claimed": false, 00:16:55.253 "zoned": false, 00:16:55.253 "supported_io_types": { 00:16:55.253 "read": true, 00:16:55.253 "write": true, 00:16:55.253 "unmap": true, 00:16:55.253 "flush": false, 00:16:55.253 "reset": true, 00:16:55.253 "nvme_admin": false, 00:16:55.253 "nvme_io": false, 00:16:55.253 "nvme_io_md": false, 00:16:55.253 "write_zeroes": true, 00:16:55.253 "zcopy": false, 00:16:55.253 "get_zone_info": false, 00:16:55.253 "zone_management": false, 00:16:55.253 "zone_append": false, 00:16:55.253 "compare": false, 00:16:55.253 "compare_and_write": false, 00:16:55.253 "abort": false, 00:16:55.253 "seek_hole": true, 00:16:55.253 "seek_data": true, 00:16:55.253 "copy": false, 00:16:55.253 "nvme_iov_md": false 00:16:55.253 }, 00:16:55.253 "driver_specific": { 00:16:55.253 "lvol": { 00:16:55.253 "lvol_store_uuid": "a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7", 00:16:55.253 "base_bdev": "nvme0n1", 00:16:55.253 "thin_provision": true, 00:16:55.253 "num_allocated_clusters": 0, 00:16:55.253 "snapshot": false, 00:16:55.253 "clone": false, 00:16:55.253 "esnap_clone": false 00:16:55.253 } 00:16:55.253 } 00:16:55.253 } 00:16:55.253 ]' 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:55.253 15:16:53 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 58dd7623-371f-4556-b524-c0392a3eef41 -c nvc0n1p0 --l2p_dram_limit 60 00:16:55.513 [2024-10-01 15:16:53.859695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.859797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.513 [2024-10-01 15:16:53.859817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:55.513 
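One aside on the error captured a little earlier in this trace: fio.sh line 52 evaluated '[' -eq 1 ']', meaning the variable on the left of -eq was unset or empty and, being unquoted, vanished under word splitting, so test saw -eq as a malformed unary operator. The run continues because the command's non-zero status simply falls through. A hedged sketch of the usual guard (flag_var is a stand-in; the real variable name lives in fio.sh):

    # '[' "$flag_var" -eq 1 ']' with flag_var empty -> "unary operator expected"
    if [ "${flag_var:-0}" -eq 1 ]; then   # quote and default the operand
        echo "conditional branch taken"
    fi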
[2024-10-01 15:16:53.859832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.859946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.859964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.513 [2024-10-01 15:16:53.859980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:55.513 [2024-10-01 15:16:53.859998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.860068] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.513 [2024-10-01 15:16:53.860461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.513 [2024-10-01 15:16:53.860490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.860505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.513 [2024-10-01 15:16:53.860533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:16:55.513 [2024-10-01 15:16:53.860548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.860811] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 718a5607-6fb7-403a-ac00-cb9cc4eda933 00:16:55.513 [2024-10-01 15:16:53.862553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.862688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:55.513 [2024-10-01 15:16:53.862787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:55.513 [2024-10-01 15:16:53.862827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.870741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.871052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.513 [2024-10-01 15:16:53.871151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.750 ms 00:16:55.513 [2024-10-01 15:16:53.871220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.871515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.871620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.513 [2024-10-01 15:16:53.871716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:55.513 [2024-10-01 15:16:53.871776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.871941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.871982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.513 [2024-10-01 15:16:53.872021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:55.513 [2024-10-01 15:16:53.872112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.872252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.513 [2024-10-01 15:16:53.874356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 
15:16:53.874512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.513 [2024-10-01 15:16:53.874593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:16:55.513 [2024-10-01 15:16:53.874635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.874743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.513 [2024-10-01 15:16:53.874764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.513 [2024-10-01 15:16:53.874777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:55.513 [2024-10-01 15:16:53.874795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.513 [2024-10-01 15:16:53.874847] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:55.513 [2024-10-01 15:16:53.875021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.513 [2024-10-01 15:16:53.875039] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.513 [2024-10-01 15:16:53.875058] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.513 [2024-10-01 15:16:53.875078] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875094] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875107] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:55.514 [2024-10-01 15:16:53.875124] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.514 [2024-10-01 15:16:53.875135] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.514 [2024-10-01 15:16:53.875150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.514 [2024-10-01 15:16:53.875163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.514 [2024-10-01 15:16:53.875190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.514 [2024-10-01 15:16:53.875216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:16:55.514 [2024-10-01 15:16:53.875230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.514 [2024-10-01 15:16:53.875339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.514 [2024-10-01 15:16:53.875357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.514 [2024-10-01 15:16:53.875381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:55.514 [2024-10-01 15:16:53.875396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.514 [2024-10-01 15:16:53.875586] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.514 [2024-10-01 15:16:53.875604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.514 [2024-10-01 15:16:53.875628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875655] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:55.514 [2024-10-01 15:16:53.875671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.514 [2024-10-01 15:16:53.875704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.514 [2024-10-01 15:16:53.875741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.514 [2024-10-01 15:16:53.875775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:55.514 [2024-10-01 15:16:53.875785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.514 [2024-10-01 15:16:53.875801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.514 [2024-10-01 15:16:53.875812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:55.514 [2024-10-01 15:16:53.875826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.514 [2024-10-01 15:16:53.875849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.514 [2024-10-01 15:16:53.875899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.514 [2024-10-01 15:16:53.875936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.514 [2024-10-01 15:16:53.875969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:55.514 [2024-10-01 15:16:53.875982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.514 [2024-10-01 15:16:53.875992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.514 [2024-10-01 15:16:53.876008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:55.514 [2024-10-01 15:16:53.876018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.514 [2024-10-01 15:16:53.876031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.514 [2024-10-01 15:16:53.876041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:55.514 [2024-10-01 15:16:53.876054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.514 [2024-10-01 15:16:53.876063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.514 [2024-10-01 15:16:53.876078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:55.514 [2024-10-01 15:16:53.876088] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.514 [2024-10-01 15:16:53.876102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.514 [2024-10-01 15:16:53.876113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:55.514 [2024-10-01 15:16:53.876125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.876135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.514 [2024-10-01 15:16:53.876149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:55.514 [2024-10-01 15:16:53.876159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.876183] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.514 [2024-10-01 15:16:53.876196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.514 [2024-10-01 15:16:53.876213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.514 [2024-10-01 15:16:53.876224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.514 [2024-10-01 15:16:53.876238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.514 [2024-10-01 15:16:53.876249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.514 [2024-10-01 15:16:53.876262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.514 [2024-10-01 15:16:53.876273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.514 [2024-10-01 15:16:53.876286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.514 [2024-10-01 15:16:53.876297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.514 [2024-10-01 15:16:53.876315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.514 [2024-10-01 15:16:53.876333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:55.514 [2024-10-01 15:16:53.876361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:55.514 [2024-10-01 15:16:53.876375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:55.514 [2024-10-01 15:16:53.876386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:55.514 [2024-10-01 15:16:53.876402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:55.514 [2024-10-01 15:16:53.876413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:55.514 [2024-10-01 15:16:53.876429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:55.514 [2024-10-01 15:16:53.876441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:55.514 [2024-10-01 15:16:53.876455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:55.514 [2024-10-01 15:16:53.876466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:55.514 [2024-10-01 15:16:53.876532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.514 [2024-10-01 15:16:53.876545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.514 [2024-10-01 15:16:53.876572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.514 [2024-10-01 15:16:53.876586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.514 [2024-10-01 15:16:53.876597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.514 [2024-10-01 15:16:53.876619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.514 [2024-10-01 15:16:53.876632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.514 [2024-10-01 15:16:53.876649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:16:55.514 [2024-10-01 15:16:53.876674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.514 [2024-10-01 15:16:53.876829] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
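(For readers following the trace: the ftl0 instance being brought up here sits on a base bdev plus a smaller, faster NV-cache bdev, nvc0n1p0 above. A minimal sketch of the equivalent manual setup with SPDK's rpc.py follows; the device names are illustrative and the bdev_ftl_create flags are an assumption based on SPDK's RPC help, so verify them against your SPDK release. bdev_get_bdevs and bdev_ftl_unload appear verbatim later in this log.)
# Hypothetical bdev names; the run above uses nvc0n1p0 as the write-buffer cache.
./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0   # create the FTL bdev over base + NV cache (flags assumed)
./scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000                   # wait up to 2000 ms for the bdev to register
./scripts/rpc.py bdev_ftl_unload -b ftl0                          # clean shutdown; persists FTL metadata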
00:16:55.514 [2024-10-01 15:16:53.876842] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:00.781 [2024-10-01 15:16:59.187427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.781 [2024-10-01 15:16:59.187504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:00.781 [2024-10-01 15:16:59.187527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5319.215 ms 00:17:00.781 [2024-10-01 15:16:59.187539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.781 [2024-10-01 15:16:59.208048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.781 [2024-10-01 15:16:59.208123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.781 [2024-10-01 15:16:59.208150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.401 ms 00:17:00.781 [2024-10-01 15:16:59.208187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.781 [2024-10-01 15:16:59.208370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.781 [2024-10-01 15:16:59.208386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.781 [2024-10-01 15:16:59.208406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:00.781 [2024-10-01 15:16:59.208421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.781 [2024-10-01 15:16:59.221123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.781 [2024-10-01 15:16:59.221191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.781 [2024-10-01 15:16:59.221213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.558 ms 00:17:00.781 [2024-10-01 15:16:59.221226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.781 [2024-10-01 15:16:59.221318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.781 [2024-10-01 15:16:59.221330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.781 [2024-10-01 15:16:59.221345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.781 [2024-10-01 15:16:59.221356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.782 [2024-10-01 15:16:59.221943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.782 [2024-10-01 15:16:59.221963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.782 [2024-10-01 15:16:59.221979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:17:00.782 [2024-10-01 15:16:59.221989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.782 [2024-10-01 15:16:59.222157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.782 [2024-10-01 15:16:59.222198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.782 [2024-10-01 15:16:59.222214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:00.782 [2024-10-01 15:16:59.222225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.782 [2024-10-01 15:16:59.229828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.782 [2024-10-01 15:16:59.229890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.782 [2024-10-01 
15:16:59.229923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.549 ms 00:17:00.782 [2024-10-01 15:16:59.229935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.782 [2024-10-01 15:16:59.238514] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:00.782 [2024-10-01 15:16:59.256159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.782 [2024-10-01 15:16:59.256236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.782 [2024-10-01 15:16:59.256255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.118 ms 00:17:00.782 [2024-10-01 15:16:59.256270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.329943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.330224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:01.040 [2024-10-01 15:16:59.330257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.708 ms 00:17:01.040 [2024-10-01 15:16:59.330286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.330526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.330544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:01.040 [2024-10-01 15:16:59.330561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:01.040 [2024-10-01 15:16:59.330574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.334582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.334631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:01.040 [2024-10-01 15:16:59.334646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:17:01.040 [2024-10-01 15:16:59.334664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.337693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.337864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:01.040 [2024-10-01 15:16:59.337886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:17:01.040 [2024-10-01 15:16:59.337899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.338258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.338285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.040 [2024-10-01 15:16:59.338298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:01.040 [2024-10-01 15:16:59.338315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.378419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.040 [2024-10-01 15:16:59.378507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:01.040 [2024-10-01 15:16:59.378527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.109 ms 00:17:01.040 [2024-10-01 15:16:59.378561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.040 [2024-10-01 15:16:59.383526] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.040 [2024-10-01 15:16:59.383583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:01.040 [2024-10-01 15:16:59.383602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.840 ms
00:17:01.040 [2024-10-01 15:16:59.383621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.040 [2024-10-01 15:16:59.387131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.040 [2024-10-01 15:16:59.387194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:17:01.040 [2024-10-01 15:16:59.387211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms
00:17:01.040 [2024-10-01 15:16:59.387224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.040 [2024-10-01 15:16:59.391001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.040 [2024-10-01 15:16:59.391044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:01.040 [2024-10-01 15:16:59.391059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms
00:17:01.040 [2024-10-01 15:16:59.391076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.040 [2024-10-01 15:16:59.391147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.040 [2024-10-01 15:16:59.391195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:01.040 [2024-10-01 15:16:59.391214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:01.040 [2024-10-01 15:16:59.391235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.040 [2024-10-01 15:16:59.391392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.040 [2024-10-01 15:16:59.391410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:01.040 [2024-10-01 15:16:59.391422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:17:01.040 [2024-10-01 15:16:59.391436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.040 [2024-10-01 15:16:59.392810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5541.585 ms, result 0
00:17:01.040 {
00:17:01.040 "name": "ftl0",
00:17:01.040 "uuid": "718a5607-6fb7-403a-ac00-cb9cc4eda933"
00:17:01.040 }
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:17:01.040 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:17:01.299 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:17:01.557 [
00:17:01.557 {
00:17:01.557 "name": "ftl0",
00:17:01.557 "aliases": [
00:17:01.557 "718a5607-6fb7-403a-ac00-cb9cc4eda933"
00:17:01.557 ],
00:17:01.557 "product_name": "FTL disk",
00:17:01.557 "block_size": 4096,
00:17:01.557 "num_blocks": 20971520,
00:17:01.557 "uuid": "718a5607-6fb7-403a-ac00-cb9cc4eda933",
00:17:01.557 "assigned_rate_limits": {
00:17:01.557 "rw_ios_per_sec": 0,
00:17:01.557 "rw_mbytes_per_sec": 0,
00:17:01.557 "r_mbytes_per_sec": 0,
00:17:01.557 "w_mbytes_per_sec": 0
00:17:01.557 },
00:17:01.557 "claimed": false,
00:17:01.557 "zoned": false,
00:17:01.557 "supported_io_types": {
00:17:01.557 "read": true,
00:17:01.557 "write": true,
00:17:01.557 "unmap": true,
00:17:01.557 "flush": true,
00:17:01.557 "reset": false,
00:17:01.557 "nvme_admin": false,
00:17:01.557 "nvme_io": false,
00:17:01.557 "nvme_io_md": false,
00:17:01.557 "write_zeroes": true,
00:17:01.557 "zcopy": false,
00:17:01.557 "get_zone_info": false,
00:17:01.557 "zone_management": false,
00:17:01.557 "zone_append": false,
00:17:01.557 "compare": false,
00:17:01.557 "compare_and_write": false,
00:17:01.557 "abort": false,
00:17:01.557 "seek_hole": false,
00:17:01.557 "seek_data": false,
00:17:01.557 "copy": false,
00:17:01.557 "nvme_iov_md": false
00:17:01.557 },
00:17:01.557 "driver_specific": {
00:17:01.557 "ftl": {
00:17:01.557 "base_bdev": "58dd7623-371f-4556-b524-c0392a3eef41",
00:17:01.557 "cache": "nvc0n1p0"
00:17:01.557 }
00:17:01.557 }
00:17:01.557 }
00:17:01.557 ]
00:17:01.557 15:16:59 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0
00:17:01.557 15:16:59 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:17:01.557 15:16:59 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:17:01.814 15:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
00:17:01.814 15:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:17:02.073 [2024-10-01 15:17:00.391287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.073 [2024-10-01 15:17:00.391527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:02.073 [2024-10-01 15:17:00.391629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:17:02.073 [2024-10-01 15:17:00.391717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.073 [2024-10-01 15:17:00.391851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:02.073 [2024-10-01 15:17:00.392725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.073 [2024-10-01 15:17:00.392859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:02.073 [2024-10-01 15:17:00.392945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms
00:17:02.073 [2024-10-01 15:17:00.392988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.073 [2024-10-01 15:17:00.393978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.073 [2024-10-01 15:17:00.394101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:02.073 [2024-10-01 15:17:00.394203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms
00:17:02.073 [2024-10-01 15:17:00.394225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.073 [2024-10-01 15:17:00.396972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.073 [2024-10-01 15:17:00.397018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:02.074 [2024-10-01
15:17:00.397031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:17:02.074 [2024-10-01 15:17:00.397046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.402445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.402498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:02.074 [2024-10-01 15:17:00.402511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms 00:17:02.074 [2024-10-01 15:17:00.402524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.404041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.404206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.074 [2024-10-01 15:17:00.404227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:17:02.074 [2024-10-01 15:17:00.404241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.409109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.409159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.074 [2024-10-01 15:17:00.409183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:17:02.074 [2024-10-01 15:17:00.409198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.409475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.409512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.074 [2024-10-01 15:17:00.409525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:02.074 [2024-10-01 15:17:00.409550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.411149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.411198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:02.074 [2024-10-01 15:17:00.411211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:17:02.074 [2024-10-01 15:17:00.411225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.412457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.412497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:02.074 [2024-10-01 15:17:00.412510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:17:02.074 [2024-10-01 15:17:00.412523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.413693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.413730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.074 [2024-10-01 15:17:00.413743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:17:02.074 [2024-10-01 15:17:00.413756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.414777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-10-01 15:17:00.414913] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.074 [2024-10-01 15:17:00.414932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:17:02.074 [2024-10-01 15:17:00.414948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-10-01 15:17:00.415007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.074 [2024-10-01 15:17:00.415027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 
15:17:00.415334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:02.074 [2024-10-01 15:17:00.415662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.074 [2024-10-01 15:17:00.415676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.415993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.075 [2024-10-01 15:17:00.416436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.075 [2024-10-01 15:17:00.416447] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 718a5607-6fb7-403a-ac00-cb9cc4eda933 00:17:02.075 [2024-10-01 15:17:00.416469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.075 [2024-10-01 15:17:00.416479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.075 [2024-10-01 15:17:00.416496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.075 [2024-10-01 15:17:00.416507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.075 [2024-10-01 15:17:00.416520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.075 [2024-10-01 15:17:00.416531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.075 [2024-10-01 15:17:00.416544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.075 [2024-10-01 15:17:00.416554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.075 [2024-10-01 15:17:00.416566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.075 [2024-10-01 15:17:00.416577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-10-01 15:17:00.416591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.075 [2024-10-01 15:17:00.416602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:17:02.075 [2024-10-01 15:17:00.416615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-10-01 15:17:00.418595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-10-01 15:17:00.418623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.075 [2024-10-01 15:17:00.418635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:17:02.075 [2024-10-01 15:17:00.418648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-10-01 15:17:00.418780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-10-01 15:17:00.418794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.075 [2024-10-01 15:17:00.418822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:02.075 [2024-10-01 15:17:00.418847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-10-01 15:17:00.426275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.075 [2024-10-01 15:17:00.426317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.075 [2024-10-01 15:17:00.426330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.075 [2024-10-01 15:17:00.426345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 
[2024-10-01 15:17:00.426436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.075 [2024-10-01 15:17:00.426451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.076 [2024-10-01 15:17:00.426462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.426476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.426606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.426631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.076 [2024-10-01 15:17:00.426642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.426656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.426699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.426713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.076 [2024-10-01 15:17:00.426724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.426738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.441671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.441900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.076 [2024-10-01 15:17:00.441924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.441938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.452230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.452404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.076 [2024-10-01 15:17:00.452492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.452538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.452726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.452837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.076 [2024-10-01 15:17:00.452974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.453016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.453209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.453255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.076 [2024-10-01 15:17:00.453346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.453387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.453570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.453657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.076 [2024-10-01 15:17:00.453728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.453803] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.453916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.453962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.076 [2024-10-01 15:17:00.454040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.454080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.454196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.454257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.076 [2024-10-01 15:17:00.454346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.454390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.454557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.076 [2024-10-01 15:17:00.454639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.076 [2024-10-01 15:17:00.454709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.076 [2024-10-01 15:17:00.454793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.076 [2024-10-01 15:17:00.455131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.902 ms, result 0 00:17:02.076 true 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85452 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85452 ']' 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85452 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85452 00:17:02.076 killing process with pid 85452 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85452' 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85452 00:17:02.076 15:17:00 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85452 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:05.354 15:17:03 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:05.354 15:17:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:05.354 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:17:05.354 fio-3.35
00:17:05.354 Starting 1 thread
00:17:09.532
00:17:09.532 test: (groupid=0, jobs=1): err= 0: pid=85662: Tue Oct 1 15:17:07 2024
00:17:09.532 read: IOPS=1129, BW=75.0MiB/s (78.6MB/s)(255MiB/3394msec)
00:17:09.532 slat (nsec): min=4410, max=61941, avg=8394.94, stdev=5103.53
00:17:09.532 clat (usec): min=260, max=9076, avg=391.28, stdev=151.39
00:17:09.532 lat (usec): min=266, max=9081, avg=399.67, stdev=152.19
00:17:09.532 clat percentiles (usec):
00:17:09.532 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338],
00:17:09.532 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 404],
00:17:09.532 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 469], 95.00th=[ 502],
00:17:09.532 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 660],
00:17:09.532 | 99.99th=[ 9110]
00:17:09.532 write: IOPS=1137, BW=75.5MiB/s (79.2MB/s)(256MiB/3390msec); 0 zone resets
00:17:09.532 slat (usec): min=15, max=120, avg=24.99, stdev=10.21
00:17:09.532 clat (usec): min=312, max=1008, avg=445.95, stdev=73.18
00:17:09.532 lat (usec): min=330, max=1064, avg=470.94, stdev=77.55
00:17:09.532 clat percentiles (usec):
00:17:09.532 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 375],
00:17:09.532 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 445],
00:17:09.532 | 70.00th=[ 474], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 570],
00:17:09.532 | 99.00th=[ 701], 99.50th=[ 766], 99.90th=[ 898], 99.95th=[ 914],
00:17:09.532 | 99.99th=[ 1012]
00:17:09.532 bw ( KiB/s): min=67864, max=83912, per=99.96%, avg=77316.00, stdev=7044.09, samples=6
00:17:09.532 iops : min= 998, max= 1234, avg=1137.00, stdev=103.59, samples=6
00:17:09.532 lat (usec) : 500=86.62%, 750=13.10%, 1000=0.26%
00:17:09.532 lat (msec) : 2=0.01%, 10=0.01%
00:17:09.532 cpu : usr=99.03%, sys=0.18%, ctx=5, majf=0, minf=1181
00:17:09.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:09.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:09.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:09.532 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:09.532 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:09.532
00:17:09.532 Run status group 0 (all jobs):
00:17:09.532 READ: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=255MiB (267MB), run=3394-3394msec
00:17:09.532 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=256MiB (269MB), run=3390-3390msec
00:17:10.096 -----------------------------------------------------
00:17:10.096 Suppressions used:
00:17:10.096 count bytes template
00:17:10.096 1 5 /usr/src/fio/parse.c
00:17:10.096 1 8 libtcmalloc_minimal.so
00:17:10.096 1 904 libcrypto.so
00:17:10.096 -----------------------------------------------------
00:17:10.096
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:17:10.096 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:10.353 15:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:10.353 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:10.353 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:10.353 fio-3.35
00:17:10.353 Starting 2 threads
00:17:42.469
00:17:42.469 first_half: (groupid=0, jobs=1): err= 0: pid=85745: Tue Oct 1 15:17:36 2024
00:17:42.469 read: IOPS=2480, BW=9923KiB/s (10.2MB/s)(256MiB/26394msec)
00:17:42.469 slat (nsec): min=3446, max=54569, avg=8071.40, stdev=4138.92
00:17:42.469 clat (usec): min=612, max=388452, avg=43247.36, stdev=33129.62
00:17:42.469 lat (usec): min=616, max=388468, avg=43255.43, stdev=33130.62
00:17:42.469 clat percentiles (msec):
00:17:42.469 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33],
00:17:42.469 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 37], 60.00th=[ 38],
00:17:42.469 | 70.00th=[ 39], 80.00th=[ 42], 90.00th=[ 48], 95.00th=[ 92],
00:17:42.469 | 99.00th=[ 213], 99.50th=[ 239], 99.90th=[ 313], 99.95th=[ 376],
00:17:42.469 | 99.99th=[ 384]
00:17:42.469 write: IOPS=2486, BW=9945KiB/s (10.2MB/s)(256MiB/26360msec); 0 zone resets
00:17:42.469 slat (usec): min=4, max=611, avg= 9.01, stdev= 7.36
00:17:42.469 clat (usec): min=383, max=60489, avg=8302.66, stdev=8075.02
00:17:42.469 lat (usec): min=394, max=60496, avg=8311.68, stdev=8075.13
00:17:42.469 clat percentiles (usec):
00:17:42.469 | 1.00th=[ 1074], 5.00th=[ 1467], 10.00th=[ 1958], 20.00th=[ 3458],
00:17:42.469 | 30.00th=[ 4948], 40.00th=[ 5800], 50.00th=[ 6652], 60.00th=[ 7308],
00:17:42.469 | 70.00th=[ 8160], 80.00th=[ 9765], 90.00th=[13566], 95.00th=[26870],
00:17:42.469 | 99.00th=[46924], 99.50th=[50070], 99.90th=[56361], 99.95th=[58459],
00:17:42.469 | 99.99th=[59507]
00:17:42.469 bw ( KiB/s): min= 264, max=51200, per=100.00%, avg=20832.20, stdev=15544.39, samples=25
00:17:42.469 iops : min= 66, max=12800, avg=5207.96, stdev=3886.15, samples=25
00:17:42.469 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.27%
00:17:42.469 lat (msec) : 2=4.89%, 4=6.55%, 10=29.01%, 20=7.45%, 50=47.10%
00:17:42.469 lat (msec) : 100=2.33%, 250=2.18%, 500=0.15%
00:17:42.469 cpu : usr=99.10%, sys=0.24%, ctx=62, majf=0, minf=5559
00:17:42.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:17:42.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:42.469 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:42.469 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:42.469 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:42.469 second_half: (groupid=0, jobs=1): err= 0: pid=85746: Tue Oct 1 15:17:36 2024
00:17:42.469 read: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(256MiB/26193msec)
00:17:42.469 slat (usec): min=3, max=117, avg= 7.86, stdev= 3.59
00:17:42.469 clat (msec): min=11, max=297, avg=42.99, stdev=26.85
00:17:42.469 lat (msec): min=11, max=297, avg=42.99, stdev=26.85
00:17:42.469 clat percentiles (msec):
00:17:42.469 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33],
00:17:42.469 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 38],
00:17:42.469 | 70.00th=[ 39], 80.00th=[ 43], 90.00th=[ 50], 95.00th=[ 83],
00:17:42.469 | 99.00th=[ 190], 99.50th=[ 203],
99.90th=[ 228], 99.95th=[ 243], 00:17:42.470 | 99.99th=[ 292] 00:17:42.470 write: IOPS=2518, BW=9.84MiB/s (10.3MB/s)(256MiB/26025msec); 0 zone resets 00:17:42.470 slat (usec): min=4, max=1858, avg= 9.08, stdev=12.60 00:17:42.470 clat (usec): min=377, max=51714, avg=8172.29, stdev=5628.63 00:17:42.470 lat (usec): min=390, max=51720, avg=8181.37, stdev=5629.25 00:17:42.470 clat percentiles (usec): 00:17:42.470 | 1.00th=[ 1369], 5.00th=[ 2278], 10.00th=[ 3195], 20.00th=[ 4555], 00:17:42.470 | 30.00th=[ 5473], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 7898], 00:17:42.470 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[13042], 95.00th=[14877], 00:17:42.470 | 99.00th=[34866], 99.50th=[40633], 99.90th=[45876], 99.95th=[47973], 00:17:42.470 | 99.99th=[50594] 00:17:42.470 bw ( KiB/s): min= 1888, max=42608, per=97.58%, avg=19409.26, stdev=12574.75, samples=27 00:17:42.470 iops : min= 472, max=10652, avg=4852.26, stdev=3143.65, samples=27 00:17:42.470 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.13% 00:17:42.470 lat (msec) : 2=1.44%, 4=6.35%, 10=30.33%, 20=10.00%, 50=46.82% 00:17:42.470 lat (msec) : 100=2.76%, 250=2.10%, 500=0.02% 00:17:42.470 cpu : usr=99.12%, sys=0.23%, ctx=47, majf=0, minf=5579 00:17:42.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:42.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.470 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.470 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.470 00:17:42.470 Run status group 0 (all jobs): 00:17:42.470 READ: bw=19.4MiB/s (20.3MB/s), 9923KiB/s-9.77MiB/s (10.2MB/s-10.2MB/s), io=512MiB (536MB), run=26193-26394msec 00:17:42.470 WRITE: bw=19.4MiB/s (20.4MB/s), 9945KiB/s-9.84MiB/s (10.2MB/s-10.3MB/s), io=512MiB (537MB), run=26025-26360msec 00:17:42.470 ----------------------------------------------------- 00:17:42.470 Suppressions used: 00:17:42.470 count bytes template 00:17:42.470 2 10 /usr/src/fio/parse.c 00:17:42.470 4 384 /usr/src/fio/iolog.c 00:17:42.470 1 8 libtcmalloc_minimal.so 00:17:42.470 1 904 libcrypto.so 00:17:42.470 ----------------------------------------------------- 00:17:42.470 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:42.470 15:17:37 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:42.471 15:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:42.471 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:42.471 fio-3.35 00:17:42.471 Starting 1 thread 00:17:54.729 00:17:54.729 test: (groupid=0, jobs=1): err= 0: pid=86081: Tue Oct 1 15:17:53 2024 00:17:54.729 read: IOPS=7381, BW=28.8MiB/s (30.2MB/s)(255MiB/8833msec) 00:17:54.729 slat (usec): min=3, max=371, avg= 5.49, stdev= 2.46 00:17:54.729 clat (usec): min=688, max=33006, avg=17330.78, stdev=2217.21 00:17:54.729 lat (usec): min=693, max=33168, avg=17336.27, stdev=2217.36 00:17:54.729 clat percentiles (usec): 00:17:54.729 | 1.00th=[15664], 5.00th=[15926], 10.00th=[16188], 20.00th=[16319], 00:17:54.729 | 30.00th=[16450], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:17:54.729 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18482], 95.00th=[21103], 00:17:54.729 | 99.00th=[28443], 99.50th=[29492], 99.90th=[31851], 99.95th=[32375], 00:17:54.729 | 99.99th=[32900] 00:17:54.729 write: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(256MiB/5802msec); 0 zone resets 00:17:54.729 slat (usec): min=3, max=817, avg=11.97, stdev=11.27 00:17:54.729 clat (usec): min=691, max=58115, avg=11264.98, stdev=13589.46 00:17:54.729 lat (usec): min=706, max=58123, avg=11276.94, stdev=13589.43 00:17:54.729 clat percentiles (usec): 00:17:54.729 | 1.00th=[ 1090], 5.00th=[ 1319], 10.00th=[ 1483], 20.00th=[ 1713], 00:17:54.729 | 30.00th=[ 1926], 40.00th=[ 2376], 50.00th=[ 7504], 60.00th=[ 8979], 00:17:54.729 | 70.00th=[10159], 80.00th=[12256], 90.00th=[40633], 95.00th=[42730], 00:17:54.729 | 99.00th=[45351], 99.50th=[47449], 99.90th=[54264], 99.95th=[56361], 00:17:54.729 | 99.99th=[57410] 00:17:54.729 bw ( KiB/s): min=26328, max=59672, per=96.70%, avg=43690.67, stdev=8953.08, samples=12 00:17:54.729 iops : min= 6582, max=14918, avg=10922.67, stdev=2238.27, samples=12 00:17:54.729 lat (usec) : 750=0.01%, 1000=0.19% 00:17:54.729 lat (msec) : 2=16.34%, 4=4.39%, 10=13.73%, 20=54.02%, 50=11.13% 00:17:54.729 lat (msec) : 100=0.18% 00:17:54.729 cpu : usr=98.80%, sys=0.36%, ctx=63, majf=0, minf=5577 00:17:54.729 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:54.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.729 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.729 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.729 00:17:54.730 Run status group 0 (all jobs): 00:17:54.730 READ: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=255MiB (267MB), run=8833-8833msec 00:17:54.730 WRITE: bw=44.1MiB/s (46.3MB/s), 44.1MiB/s-44.1MiB/s (46.3MB/s-46.3MB/s), io=256MiB (268MB), run=5802-5802msec 00:17:55.668 ----------------------------------------------------- 00:17:55.668 Suppressions used: 00:17:55.668 count bytes template 00:17:55.668 1 5 /usr/src/fio/parse.c 00:17:55.668 2 192 /usr/src/fio/iolog.c 00:17:55.668 1 8 libtcmalloc_minimal.so 00:17:55.668 1 904 libcrypto.so 00:17:55.668 ----------------------------------------------------- 00:17:55.668 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:55.668 Remove shared memory files 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70651 /dev/shm/spdk_tgt_trace.pid84397 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:55.668 ************************************ 00:17:55.668 END TEST ftl_fio_basic 00:17:55.668 ************************************ 00:17:55.668 00:17:55.668 real 1m4.811s 00:17:55.668 user 2m28.611s 00:17:55.668 sys 0m4.046s 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.668 15:17:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:55.927 15:17:54 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:55.927 15:17:54 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:55.927 15:17:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.927 15:17:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:55.927 ************************************ 00:17:55.927 START TEST ftl_bdevperf 00:17:55.927 ************************************ 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:55.927 * Looking for test storage... 
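
Note on the three fio_bdev invocations in the ftl_fio_basic run above: each follows the same pattern traced from autotest_common.sh's fio_plugin helper, i.e. locate the ASan runtime the SPDK fio plugin was linked against, then preload it ahead of the plugin so fio can load it. A minimal sketch of that pattern, using the paths from this run (this is the shape of the traced commands, not the helper's literal implementation):

    # Sketch only; the real logic lives in autotest_common.sh (fio_plugin).
    # If the plugin was built without ASan, asan_lib stays empty and only
    # the plugin itself is preloaded.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
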
00:17:55.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:55.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.927 --rc genhtml_branch_coverage=1 00:17:55.927 --rc genhtml_function_coverage=1 00:17:55.927 --rc genhtml_legend=1 00:17:55.927 --rc geninfo_all_blocks=1 00:17:55.927 --rc geninfo_unexecuted_blocks=1 00:17:55.927 00:17:55.927 ' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:55.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.927 --rc genhtml_branch_coverage=1 00:17:55.927 
--rc genhtml_function_coverage=1 00:17:55.927 --rc genhtml_legend=1 00:17:55.927 --rc geninfo_all_blocks=1 00:17:55.927 --rc geninfo_unexecuted_blocks=1 00:17:55.927 00:17:55.927 ' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:55.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.927 --rc genhtml_branch_coverage=1 00:17:55.927 --rc genhtml_function_coverage=1 00:17:55.927 --rc genhtml_legend=1 00:17:55.927 --rc geninfo_all_blocks=1 00:17:55.927 --rc geninfo_unexecuted_blocks=1 00:17:55.927 00:17:55.927 ' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:55.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.927 --rc genhtml_branch_coverage=1 00:17:55.927 --rc genhtml_function_coverage=1 00:17:55.927 --rc genhtml_legend=1 00:17:55.927 --rc geninfo_all_blocks=1 00:17:55.927 --rc geninfo_unexecuted_blocks=1 00:17:55.927 00:17:55.927 ' 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:55.927 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86319 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86319 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 86319 ']' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.187 15:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.187 [2024-10-01 15:17:54.593297] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
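
Note on the startup above: bdevperf is launched with -z (initialize the app framework, then wait for an RPC before running any I/O) and -T ftl0 (target only the ftl0 bdev once it exists), and waitforlisten blocks until the application's RPC socket answers. A minimal sketch of that start-and-wait handshake, assuming the default /var/tmp/spdk.sock socket (this is what waitforlisten accomplishes, not its literal implementation):

    # Start bdevperf paused; it sits on the RPC socket until told to run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll with a trivial RPC until the socket is up; bail if the app died.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" 2>/dev/null || exit 1
        sleep 0.1
    done
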
00:17:56.187 [2024-10-01 15:17:54.593461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86319 ] 00:17:56.446 [2024-10-01 15:17:54.754893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.446 [2024-10-01 15:17:54.850181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:57.013 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:57.583 15:17:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:57.583 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:57.583 { 00:17:57.583 "name": "nvme0n1", 00:17:57.583 "aliases": [ 00:17:57.583 "084ce9a1-5b96-4fbc-a0c1-d6cbcc137673" 00:17:57.583 ], 00:17:57.583 "product_name": "NVMe disk", 00:17:57.583 "block_size": 4096, 00:17:57.583 "num_blocks": 1310720, 00:17:57.583 "uuid": "084ce9a1-5b96-4fbc-a0c1-d6cbcc137673", 00:17:57.583 "numa_id": -1, 00:17:57.583 "assigned_rate_limits": { 00:17:57.583 "rw_ios_per_sec": 0, 00:17:57.583 "rw_mbytes_per_sec": 0, 00:17:57.583 "r_mbytes_per_sec": 0, 00:17:57.583 "w_mbytes_per_sec": 0 00:17:57.583 }, 00:17:57.583 "claimed": true, 00:17:57.583 "claim_type": "read_many_write_one", 00:17:57.583 "zoned": false, 00:17:57.583 "supported_io_types": { 00:17:57.583 "read": true, 00:17:57.583 "write": true, 00:17:57.583 "unmap": true, 00:17:57.583 "flush": true, 00:17:57.583 "reset": true, 00:17:57.583 "nvme_admin": true, 00:17:57.583 "nvme_io": true, 00:17:57.583 "nvme_io_md": false, 00:17:57.583 "write_zeroes": true, 00:17:57.583 "zcopy": false, 00:17:57.583 "get_zone_info": false, 00:17:57.583 "zone_management": false, 00:17:57.583 "zone_append": false, 00:17:57.583 "compare": true, 00:17:57.583 "compare_and_write": false, 00:17:57.583 "abort": true, 00:17:57.583 "seek_hole": false, 00:17:57.583 "seek_data": false, 00:17:57.583 "copy": true, 00:17:57.583 "nvme_iov_md": false 00:17:57.583 }, 00:17:57.583 "driver_specific": { 00:17:57.583 
"nvme": [ 00:17:57.583 { 00:17:57.583 "pci_address": "0000:00:11.0", 00:17:57.583 "trid": { 00:17:57.583 "trtype": "PCIe", 00:17:57.583 "traddr": "0000:00:11.0" 00:17:57.583 }, 00:17:57.583 "ctrlr_data": { 00:17:57.583 "cntlid": 0, 00:17:57.583 "vendor_id": "0x1b36", 00:17:57.583 "model_number": "QEMU NVMe Ctrl", 00:17:57.583 "serial_number": "12341", 00:17:57.583 "firmware_revision": "8.0.0", 00:17:57.583 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:57.583 "oacs": { 00:17:57.583 "security": 0, 00:17:57.583 "format": 1, 00:17:57.583 "firmware": 0, 00:17:57.583 "ns_manage": 1 00:17:57.583 }, 00:17:57.583 "multi_ctrlr": false, 00:17:57.583 "ana_reporting": false 00:17:57.583 }, 00:17:57.583 "vs": { 00:17:57.583 "nvme_version": "1.4" 00:17:57.583 }, 00:17:57.583 "ns_data": { 00:17:57.583 "id": 1, 00:17:57.583 "can_share": false 00:17:57.583 } 00:17:57.583 } 00:17:57.583 ], 00:17:57.583 "mp_policy": "active_passive" 00:17:57.583 } 00:17:57.583 } 00:17:57.583 ]' 00:17:57.583 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:57.903 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:58.167 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7 00:17:58.167 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:58.167 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7 00:17:58.431 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:58.431 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c9dadc9b-67ea-41c3-9043-06aab3add9f2 00:17:58.431 15:17:56 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9dadc9b-67ea-41c3-9043-06aab3add9f2 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.690 15:17:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:58.690 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:58.949 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:58.949 { 00:17:58.949 "name": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:17:58.949 "aliases": [ 00:17:58.949 "lvs/nvme0n1p0" 00:17:58.949 ], 00:17:58.949 "product_name": "Logical Volume", 00:17:58.949 "block_size": 4096, 00:17:58.949 "num_blocks": 26476544, 00:17:58.949 "uuid": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:17:58.949 "assigned_rate_limits": { 00:17:58.949 "rw_ios_per_sec": 0, 00:17:58.949 "rw_mbytes_per_sec": 0, 00:17:58.949 "r_mbytes_per_sec": 0, 00:17:58.949 "w_mbytes_per_sec": 0 00:17:58.949 }, 00:17:58.949 "claimed": false, 00:17:58.949 "zoned": false, 00:17:58.949 "supported_io_types": { 00:17:58.949 "read": true, 00:17:58.949 "write": true, 00:17:58.949 "unmap": true, 00:17:58.949 "flush": false, 00:17:58.949 "reset": true, 00:17:58.949 "nvme_admin": false, 00:17:58.949 "nvme_io": false, 00:17:58.949 "nvme_io_md": false, 00:17:58.949 "write_zeroes": true, 00:17:58.949 "zcopy": false, 00:17:58.949 "get_zone_info": false, 00:17:58.949 "zone_management": false, 00:17:58.949 "zone_append": false, 00:17:58.949 "compare": false, 00:17:58.949 "compare_and_write": false, 00:17:58.949 "abort": false, 00:17:58.949 "seek_hole": true, 00:17:58.949 "seek_data": true, 00:17:58.949 "copy": false, 00:17:58.949 "nvme_iov_md": false 00:17:58.949 }, 00:17:58.949 "driver_specific": { 00:17:58.949 "lvol": { 00:17:58.949 "lvol_store_uuid": "c9dadc9b-67ea-41c3-9043-06aab3add9f2", 00:17:58.949 "base_bdev": "nvme0n1", 00:17:58.949 "thin_provision": true, 00:17:58.949 "num_allocated_clusters": 0, 00:17:58.949 "snapshot": false, 00:17:58.949 "clone": false, 00:17:58.949 "esnap_clone": false 00:17:58.949 } 00:17:58.949 } 00:17:58.949 } 00:17:58.949 ]' 00:17:58.949 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:58.949 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:58.949 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:59.209 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:59.468 15:17:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:59.727 { 00:17:59.727 "name": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:17:59.727 "aliases": [ 00:17:59.727 "lvs/nvme0n1p0" 00:17:59.727 ], 00:17:59.727 "product_name": "Logical Volume", 00:17:59.727 "block_size": 4096, 00:17:59.727 "num_blocks": 26476544, 00:17:59.727 "uuid": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:17:59.727 "assigned_rate_limits": { 00:17:59.727 "rw_ios_per_sec": 0, 00:17:59.727 "rw_mbytes_per_sec": 0, 00:17:59.727 "r_mbytes_per_sec": 0, 00:17:59.727 "w_mbytes_per_sec": 0 00:17:59.727 }, 00:17:59.727 "claimed": false, 00:17:59.727 "zoned": false, 00:17:59.727 "supported_io_types": { 00:17:59.727 "read": true, 00:17:59.727 "write": true, 00:17:59.727 "unmap": true, 00:17:59.727 "flush": false, 00:17:59.727 "reset": true, 00:17:59.727 "nvme_admin": false, 00:17:59.727 "nvme_io": false, 00:17:59.727 "nvme_io_md": false, 00:17:59.727 "write_zeroes": true, 00:17:59.727 "zcopy": false, 00:17:59.727 "get_zone_info": false, 00:17:59.727 "zone_management": false, 00:17:59.727 "zone_append": false, 00:17:59.727 "compare": false, 00:17:59.727 "compare_and_write": false, 00:17:59.727 "abort": false, 00:17:59.727 "seek_hole": true, 00:17:59.727 "seek_data": true, 00:17:59.727 "copy": false, 00:17:59.727 "nvme_iov_md": false 00:17:59.727 }, 00:17:59.727 "driver_specific": { 00:17:59.727 "lvol": { 00:17:59.727 "lvol_store_uuid": "c9dadc9b-67ea-41c3-9043-06aab3add9f2", 00:17:59.727 "base_bdev": "nvme0n1", 00:17:59.727 "thin_provision": true, 00:17:59.727 "num_allocated_clusters": 0, 00:17:59.727 "snapshot": false, 00:17:59.727 "clone": false, 00:17:59.727 "esnap_clone": false 00:17:59.727 } 00:17:59.727 } 00:17:59.727 } 00:17:59.727 ]' 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:59.727 15:17:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=05b67349-12b2-403a-a66e-bd4b830f1b08 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:59.987 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05b67349-12b2-403a-a66e-bd4b830f1b08 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.247 { 00:18:00.247 "name": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:18:00.247 "aliases": [ 00:18:00.247 "lvs/nvme0n1p0" 00:18:00.247 ], 00:18:00.247 "product_name": "Logical Volume", 00:18:00.247 "block_size": 4096, 00:18:00.247 "num_blocks": 26476544, 00:18:00.247 "uuid": "05b67349-12b2-403a-a66e-bd4b830f1b08", 00:18:00.247 "assigned_rate_limits": { 00:18:00.247 "rw_ios_per_sec": 0, 00:18:00.247 "rw_mbytes_per_sec": 0, 00:18:00.247 "r_mbytes_per_sec": 0, 00:18:00.247 "w_mbytes_per_sec": 0 00:18:00.247 }, 00:18:00.247 "claimed": false, 00:18:00.247 "zoned": false, 00:18:00.247 "supported_io_types": { 00:18:00.247 "read": true, 00:18:00.247 "write": true, 00:18:00.247 "unmap": true, 00:18:00.247 "flush": false, 00:18:00.247 "reset": true, 00:18:00.247 "nvme_admin": false, 00:18:00.247 "nvme_io": false, 00:18:00.247 "nvme_io_md": false, 00:18:00.247 "write_zeroes": true, 00:18:00.247 "zcopy": false, 00:18:00.247 "get_zone_info": false, 00:18:00.247 "zone_management": false, 00:18:00.247 "zone_append": false, 00:18:00.247 "compare": false, 00:18:00.247 "compare_and_write": false, 00:18:00.247 "abort": false, 00:18:00.247 "seek_hole": true, 00:18:00.247 "seek_data": true, 00:18:00.247 "copy": false, 00:18:00.247 "nvme_iov_md": false 00:18:00.247 }, 00:18:00.247 "driver_specific": { 00:18:00.247 "lvol": { 00:18:00.247 "lvol_store_uuid": "c9dadc9b-67ea-41c3-9043-06aab3add9f2", 00:18:00.247 "base_bdev": "nvme0n1", 00:18:00.247 "thin_provision": true, 00:18:00.247 "num_allocated_clusters": 0, 00:18:00.247 "snapshot": false, 00:18:00.247 "clone": false, 00:18:00.247 "esnap_clone": false 00:18:00.247 } 00:18:00.247 } 00:18:00.247 } 00:18:00.247 ]' 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:00.247 15:17:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05b67349-12b2-403a-a66e-bd4b830f1b08 -c nvc0n1p0 --l2p_dram_limit 20 00:18:00.508 [2024-10-01 15:17:58.973828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.973900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.508 [2024-10-01 15:17:58.973922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.508 [2024-10-01 15:17:58.973937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.974009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.974030] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.508 [2024-10-01 15:17:58.974050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:00.508 [2024-10-01 15:17:58.974063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.974087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.508 [2024-10-01 15:17:58.974494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.508 [2024-10-01 15:17:58.974529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.974551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.508 [2024-10-01 15:17:58.974571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:18:00.508 [2024-10-01 15:17:58.974583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.974710] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ab08a86-cd27-4318-8bd2-6445a819cf80 00:18:00.508 [2024-10-01 15:17:58.976273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.976314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:00.508 [2024-10-01 15:17:58.976329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:00.508 [2024-10-01 15:17:58.976345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.984286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.984340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.508 [2024-10-01 15:17:58.984355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.909 ms 00:18:00.508 [2024-10-01 15:17:58.984373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.984475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.984492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.508 [2024-10-01 15:17:58.984506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:00.508 [2024-10-01 15:17:58.984522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.984618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.984640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.508 [2024-10-01 15:17:58.984653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:00.508 [2024-10-01 15:17:58.984670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.984703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.508 [2024-10-01 15:17:58.986673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.986722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.508 [2024-10-01 15:17:58.986739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:18:00.508 [2024-10-01 15:17:58.986751] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.986797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.986809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.508 [2024-10-01 15:17:58.986827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:00.508 [2024-10-01 15:17:58.986846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.508 [2024-10-01 15:17:58.986870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:00.508 [2024-10-01 15:17:58.987040] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.508 [2024-10-01 15:17:58.987070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.508 [2024-10-01 15:17:58.987091] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.508 [2024-10-01 15:17:58.987109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.508 [2024-10-01 15:17:58.987124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.508 [2024-10-01 15:17:58.987139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.508 [2024-10-01 15:17:58.987150] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.508 [2024-10-01 15:17:58.987164] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.508 [2024-10-01 15:17:58.987175] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.508 [2024-10-01 15:17:58.987208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.508 [2024-10-01 15:17:58.987219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.508 [2024-10-01 15:17:58.987237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:18:00.508 [2024-10-01 15:17:58.987255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.509 [2024-10-01 15:17:58.987337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.509 [2024-10-01 15:17:58.987355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.509 [2024-10-01 15:17:58.987370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:00.509 [2024-10-01 15:17:58.987388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.509 [2024-10-01 15:17:58.987477] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.509 [2024-10-01 15:17:58.987505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.509 [2024-10-01 15:17:58.987519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.509 [2024-10-01 15:17:58.987557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.509 
[2024-10-01 15:17:58.987580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.509 [2024-10-01 15:17:58.987593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.509 [2024-10-01 15:17:58.987618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.509 [2024-10-01 15:17:58.987627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.509 [2024-10-01 15:17:58.987643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.509 [2024-10-01 15:17:58.987653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.509 [2024-10-01 15:17:58.987666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.509 [2024-10-01 15:17:58.987708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.509 [2024-10-01 15:17:58.987731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.509 [2024-10-01 15:17:58.987768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.509 [2024-10-01 15:17:58.987799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.509 [2024-10-01 15:17:58.987833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.509 [2024-10-01 15:17:58.987867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.509 [2024-10-01 15:17:58.987889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.509 [2024-10-01 15:17:58.987901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.509 [2024-10-01 15:17:58.987910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.509 [2024-10-01 15:17:58.987922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.509 [2024-10-01 15:17:58.987932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.509 [2024-10-01 15:17:58.987944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.509 [2024-10-01 15:17:58.987970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.509 [2024-10-01 15:17:58.987982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:00.509 [2024-10-01 15:17:58.987992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.988005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.509 [2024-10-01 15:17:58.988016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.509 [2024-10-01 15:17:58.988030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.988042] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.509 [2024-10-01 15:17:58.988059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.509 [2024-10-01 15:17:58.988070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.509 [2024-10-01 15:17:58.988085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.509 [2024-10-01 15:17:58.988097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.509 [2024-10-01 15:17:58.988110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.509 [2024-10-01 15:17:58.988120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.509 [2024-10-01 15:17:58.988133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.509 [2024-10-01 15:17:58.988142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.509 [2024-10-01 15:17:58.988155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.509 [2024-10-01 15:17:58.988170] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.509 [2024-10-01 15:17:58.988207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.509 [2024-10-01 15:17:58.988234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.509 [2024-10-01 15:17:58.988245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.509 [2024-10-01 15:17:58.988258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.509 [2024-10-01 15:17:58.988270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.509 [2024-10-01 15:17:58.988286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.509 [2024-10-01 15:17:58.988297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.509 [2024-10-01 15:17:58.988311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.509 [2024-10-01 15:17:58.988322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.509 [2024-10-01 15:17:58.988336] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.509 [2024-10-01 15:17:58.988398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.509 [2024-10-01 15:17:58.988415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.509 [2024-10-01 15:17:58.988440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.509 [2024-10-01 15:17:58.988457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.509 [2024-10-01 15:17:58.988471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.509 [2024-10-01 15:17:58.988483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.509 [2024-10-01 15:17:58.988500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.509 [2024-10-01 15:17:58.988514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:18:00.509 [2024-10-01 15:17:58.988528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.509 [2024-10-01 15:17:58.988599] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
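
Condensed, the RPC sequence that produced the ftl0 device whose layout is dumped above is the following (commands are as traced in this run; the UUIDs are the ones this particular run generated). The 103424 MiB (101 GiB) base lvol yields 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB l2p region in the layout dump, while --l2p_dram_limit 20 caps the resident portion, as the ftl_l2p_cache_init notice further down ("19 (of 20) MiB") confirms:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
    "$rpc" bdev_lvol_delete_lvstore -u a09f4a01-bfd4-49a6-8c5e-d2254c40c8e7  # clear stale lvstore
    "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs
    "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u c9dadc9b-67ea-41c3-9043-06aab3add9f2  # thin lvol
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV-cache part
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d 05b67349-12b2-403a-a66e-bd4b830f1b08 \
        -c nvc0n1p0 --l2p_dram_limit 20
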
00:18:00.509 [2024-10-01 15:17:58.988617] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:04.752 [2024-10-01 15:18:03.039795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.039875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:04.752 [2024-10-01 15:18:03.039893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4057.773 ms 00:18:04.752 [2024-10-01 15:18:03.039909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.063248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.063347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.752 [2024-10-01 15:18:03.063370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.254 ms 00:18:04.752 [2024-10-01 15:18:03.063394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.063581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.063605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:04.752 [2024-10-01 15:18:03.063621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:04.752 [2024-10-01 15:18:03.063640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.075833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.075902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.752 [2024-10-01 15:18:03.075918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.091 ms 00:18:04.752 [2024-10-01 15:18:03.075950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.076008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.076025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.752 [2024-10-01 15:18:03.076037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:04.752 [2024-10-01 15:18:03.076053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.076586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.076615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.752 [2024-10-01 15:18:03.076628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:18:04.752 [2024-10-01 15:18:03.076645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.076781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.076799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.752 [2024-10-01 15:18:03.076811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:04.752 [2024-10-01 15:18:03.076824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.083165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.083245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.752 [2024-10-01 
15:18:03.083262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.329 ms 00:18:04.752 [2024-10-01 15:18:03.083276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.093661] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:04.752 [2024-10-01 15:18:03.100141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.100215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:04.752 [2024-10-01 15:18:03.100236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.776 ms 00:18:04.752 [2024-10-01 15:18:03.100249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.187243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.187331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:04.752 [2024-10-01 15:18:03.187362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.063 ms 00:18:04.752 [2024-10-01 15:18:03.187379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.187596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.187628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:04.752 [2024-10-01 15:18:03.187657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:18:04.752 [2024-10-01 15:18:03.187684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.192093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.192152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:04.752 [2024-10-01 15:18:03.192182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.382 ms 00:18:04.752 [2024-10-01 15:18:03.192195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.195455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.195504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:04.752 [2024-10-01 15:18:03.195522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:18:04.752 [2024-10-01 15:18:03.195533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.195850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.752 [2024-10-01 15:18:03.195875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:04.752 [2024-10-01 15:18:03.195894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:04.752 [2024-10-01 15:18:03.195906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.752 [2024-10-01 15:18:03.239822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.239904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:04.753 [2024-10-01 15:18:03.239934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.948 ms 00:18:04.753 [2024-10-01 15:18:03.239953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.245341] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.245407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:04.753 [2024-10-01 15:18:03.245430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:18:04.753 [2024-10-01 15:18:03.245442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.249819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.249884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:04.753 [2024-10-01 15:18:03.249903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:18:04.753 [2024-10-01 15:18:03.249914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.254316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.254369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:04.753 [2024-10-01 15:18:03.254391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:18:04.753 [2024-10-01 15:18:03.254403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.254451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.254477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:04.753 [2024-10-01 15:18:03.254497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:04.753 [2024-10-01 15:18:03.254508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.254592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.753 [2024-10-01 15:18:03.254607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:04.753 [2024-10-01 15:18:03.254623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:04.753 [2024-10-01 15:18:03.254634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.753 [2024-10-01 15:18:03.255845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4288.514 ms, result 0 00:18:04.753 { 00:18:04.753 "name": "ftl0", 00:18:04.753 "uuid": "4ab08a86-cd27-4318-8bd2-6445a819cf80" 00:18:04.753 } 00:18:04.753 15:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:04.753 15:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:04.753 15:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:05.012 15:18:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:05.272 [2024-10-01 15:18:03.633893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:05.272 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:05.272 Zero copy mechanism will not be used. 00:18:05.272 Running I/O for 4 seconds... 
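The two zero-copy notices above are a buffer-handling decision by bdevperf, not an error: the requested I/O size (69632 bytes, the -o argument) exceeds its 65,536-byte zero-copy threshold, so the data is copied rather than passed zero-copy. A quick check of the arithmetic, with both constants taken from the log itself rather than from bdevperf's source:

    # Values quoted from the notice above; nothing is queried from bdevperf.
    ZERO_COPY_THRESHOLD = 65536   # bytes, from "zero copy threshold (65536)"
    io_size = 69632               # bytes, the -o argument to perform_tests

    print(io_size > ZERO_COPY_THRESHOLD)   # True -> zero copy is disabled
    print(io_size // 4096)                 # 17 -> seventeen 4 KiB FTL blocks
    print(io_size / 1024)                  # 68.0 -> KiB per I/O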
00:18:09.159 1660.00 IOPS, 110.23 MiB/s
1679.50 IOPS, 111.53 MiB/s
1667.33 IOPS, 110.72 MiB/s
1689.75 IOPS, 112.21 MiB/s
00:18:09.159 Latency(us)
00:18:09.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.159 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:09.159 ftl0 : 4.00 1688.91 112.15 0.00 0.00 619.52 225.36 3158.36
00:18:09.159 ===================================================================================================================
00:18:09.159 Total : 1688.91 112.15 0.00 0.00 619.52 225.36 3158.36
00:18:09.159 [2024-10-01 15:18:07.636963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:09.159 {
00:18:09.159 "results": [
00:18:09.159 {
00:18:09.159 "job": "ftl0",
00:18:09.159 "core_mask": "0x1",
00:18:09.159 "workload": "randwrite",
00:18:09.159 "status": "finished",
00:18:09.159 "queue_depth": 1,
00:18:09.159 "io_size": 69632,
00:18:09.159 "runtime": 4.002581,
00:18:09.159 "iops": 1688.910230673658,
00:18:09.159 "mibps": 112.15419500567259,
00:18:09.159 "io_failed": 0,
00:18:09.159 "io_timeout": 0,
00:18:09.159 "avg_latency_us": 619.5202433402247,
00:18:09.159 "min_latency_us": 225.36224899598395,
00:18:09.159 "max_latency_us": 3158.3614457831327
00:18:09.159 }
00:18:09.159 ],
00:18:09.159 "core_count": 1
00:18:09.159 }
00:18:09.423 15:18:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-10-01 15:18:07.770239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:18:13.602 9702.00 IOPS, 37.90 MiB/s
9727.50 IOPS, 38.00 MiB/s
9863.67 IOPS, 38.53 MiB/s
9864.75 IOPS, 38.53 MiB/s
00:18:13.602 Latency(us)
00:18:13.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:13.602 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:13.602 ftl0 : 4.02 9856.91 38.50 0.00 0.00 12959.40 246.75 33899.75
00:18:13.602 ===================================================================================================================
00:18:13.602 Total : 9856.91 38.50 0.00 0.00 12959.40 0.00 33899.75
00:18:13.602 [2024-10-01 15:18:11.786532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:13.602 {
00:18:13.602 "results": [
00:18:13.602 {
00:18:13.602 "job": "ftl0",
00:18:13.602 "core_mask": "0x1",
00:18:13.602 "workload": "randwrite",
00:18:13.602 "status": "finished",
00:18:13.602 "queue_depth": 128,
00:18:13.602 "io_size": 4096,
00:18:13.602 "runtime": 4.015965,
00:18:13.602 "iops": 9856.908613496382,
00:18:13.602 "mibps": 38.50354927147024,
00:18:13.602 "io_failed": 0,
00:18:13.602 "io_timeout": 0,
00:18:13.602 "avg_latency_us": 12959.399645052357,
00:18:13.602 "min_latency_us": 246.74698795180723,
00:18:13.602 "max_latency_us": 33899.74618473896
00:18:13.602 }
00:18:13.602 ],
00:18:13.602 "core_count": 1
00:18:13.602 }
00:18:13.602 15:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-10-01 15:18:11.906451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
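Two consistency checks can be run against the JSON blocks above using nothing but the printed fields: throughput in MiB/s should equal IOPS x io_size / 2^20, and by Little's law the average number of requests in flight (IOPS x mean latency) should land near the configured queue depth. A small sketch over the two randwrite results:

    # (queue_depth, io_size_bytes, iops, avg_latency_us), copied from the two
    # randwrite result blocks above.
    runs = [
        (1,   69632, 1688.910230673658, 619.5202433402247),
        (128, 4096,  9856.908613496382, 12959.399645052357),
    ]
    for qd, io_size, iops, lat_us in runs:
        mibps = iops * io_size / 2**20    # should reproduce the "mibps" field
        inflight = iops * lat_us / 1e6    # Little's law: L = lambda * W
        print(f"qd={qd:3d}: {mibps:7.2f} MiB/s, ~{inflight:.1f} in flight")
    # qd=  1:  112.15 MiB/s, ~1.0 in flight
    # qd=128:   38.50 MiB/s, ~127.7 in flight

Both runs check out, and the depth-128 run really does keep ~128 requests outstanding, so its much higher average latency is queueing delay rather than slower media.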
00:18:17.386 8487.00 IOPS, 33.15 MiB/s
8396.50 IOPS, 32.80 MiB/s
8544.33 IOPS, 33.38 MiB/s
8634.75 IOPS, 33.73 MiB/s
00:18:17.386 Latency(us)
00:18:17.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:17.386 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:17.386 Verification LBA range: start 0x0 length 0x1400000
00:18:17.386 ftl0 : 4.01 8645.51 33.77 0.00 0.00 14760.48 269.78 31162.50
00:18:17.386 ===================================================================================================================
00:18:17.386 Total : 8645.51 33.77 0.00 0.00 14760.48 0.00 31162.50
00:18:17.386 [2024-10-01 15:18:15.916904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:17.386 {
00:18:17.386 "results": [
00:18:17.386 {
00:18:17.386 "job": "ftl0",
00:18:17.386 "core_mask": "0x1",
00:18:17.386 "workload": "verify",
00:18:17.386 "status": "finished",
00:18:17.386 "verify_range": {
00:18:17.386 "start": 0,
00:18:17.386 "length": 20971520
00:18:17.386 },
00:18:17.386 "queue_depth": 128,
00:18:17.386 "io_size": 4096,
00:18:17.386 "runtime": 4.00971,
00:18:17.386 "iops": 8645.513017150866,
00:18:17.386 "mibps": 33.77153522324557,
00:18:17.386 "io_failed": 0,
00:18:17.386 "io_timeout": 0,
00:18:17.386 "avg_latency_us": 14760.480861517955,
00:18:17.386 "min_latency_us": 269.7767068273092,
00:18:17.386 "max_latency_us": 31162.499598393573
00:18:17.386 }
00:18:17.386 ],
00:18:17.386 "core_count": 1
00:18:17.386 }
00:18:17.645 15:18:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:17.645 [2024-10-01 15:18:16.118751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.645 [2024-10-01 15:18:16.118812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:17.645 [2024-10-01 15:18:16.118832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:17.645 [2024-10-01 15:18:16.118843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.645 [2024-10-01 15:18:16.118872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:17.645 [2024-10-01 15:18:16.119538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.645 [2024-10-01 15:18:16.119564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:17.645 [2024-10-01 15:18:16.119579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms
00:18:17.645 [2024-10-01 15:18:16.119591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.645 [2024-10-01 15:18:16.121171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.645 [2024-10-01 15:18:16.121234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:17.645 [2024-10-01 15:18:16.121248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms
00:18:17.645 [2024-10-01 15:18:16.121265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.905 [2024-10-01 15:18:16.328156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.905 [2024-10-01 15:18:16.328264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:17.905 [2024-10-01 15:18:16.328283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.191 ms
00:18:17.905 [2024-10-01
15:18:16.328297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.333333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.333371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.905 [2024-10-01 15:18:16.333384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:18:17.905 [2024-10-01 15:18:16.333398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.335097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.335142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.905 [2024-10-01 15:18:16.335156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:18:17.905 [2024-10-01 15:18:16.335182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.339708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.339755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.905 [2024-10-01 15:18:16.339772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:18:17.905 [2024-10-01 15:18:16.339789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.339887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.339903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.905 [2024-10-01 15:18:16.339915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:17.905 [2024-10-01 15:18:16.339927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.341677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.341716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.905 [2024-10-01 15:18:16.341728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:18:17.905 [2024-10-01 15:18:16.341740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.343113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.343153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.905 [2024-10-01 15:18:16.343165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:18:17.905 [2024-10-01 15:18:16.343193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.344384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.905 [2024-10-01 15:18:16.344424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.905 [2024-10-01 15:18:16.344435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:18:17.905 [2024-10-01 15:18:16.344451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.905 [2024-10-01 15:18:16.345650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.906 [2024-10-01 15:18:16.345689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.906 [2024-10-01 15:18:16.345701] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:18:17.906 [2024-10-01 15:18:16.345713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.906 [2024-10-01 15:18:16.345742] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.906 [2024-10-01 15:18:16.345781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.345992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:18:17.906 [2024-10-01 15:18:16.346062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.906 [2024-10-01 15:18:16.346869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.346992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.347003] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.347017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.347027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.907 [2024-10-01 15:18:16.347048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.907 [2024-10-01 15:18:16.347058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ab08a86-cd27-4318-8bd2-6445a819cf80 00:18:17.907 [2024-10-01 15:18:16.347076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.907 [2024-10-01 15:18:16.347103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.907 [2024-10-01 15:18:16.347116] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.907 [2024-10-01 15:18:16.347126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.907 [2024-10-01 15:18:16.347142] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.907 [2024-10-01 15:18:16.347152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.907 [2024-10-01 15:18:16.347166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.907 [2024-10-01 15:18:16.347185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.907 [2024-10-01 15:18:16.347196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.907 [2024-10-01 15:18:16.347207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.907 [2024-10-01 15:18:16.347219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.907 [2024-10-01 15:18:16.347235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:18:17.907 [2024-10-01 15:18:16.347248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.349015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.907 [2024-10-01 15:18:16.349041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.907 [2024-10-01 15:18:16.349053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:18:17.907 [2024-10-01 15:18:16.349065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.349166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.907 [2024-10-01 15:18:16.349195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.907 [2024-10-01 15:18:16.349207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:17.907 [2024-10-01 15:18:16.349232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.355432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.355460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.907 [2024-10-01 15:18:16.355471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.355485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.355554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 
15:18:16.355573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.907 [2024-10-01 15:18:16.355584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.355599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.355689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.355706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.907 [2024-10-01 15:18:16.355717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.355729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.355747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.355761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.907 [2024-10-01 15:18:16.355771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.355786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.368329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.368385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.907 [2024-10-01 15:18:16.368398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.368412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.377733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.377789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.907 [2024-10-01 15:18:16.377802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.377815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.377899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.377914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.907 [2024-10-01 15:18:16.377925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.377938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.377972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.377988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.907 [2024-10-01 15:18:16.377998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.378016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.378098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.907 [2024-10-01 15:18:16.378119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.907 [2024-10-01 15:18:16.378130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.907 [2024-10-01 15:18:16.378143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.907 [2024-10-01 15:18:16.378449] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:17.907 [2024-10-01 15:18:16.378503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:17.907 [2024-10-01 15:18:16.378536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:17.907 [2024-10-01 15:18:16.378569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.907 [2024-10-01 15:18:16.378639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:17.907 [2024-10-01 15:18:16.378738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:17.907 [2024-10-01 15:18:16.378775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:17.907 [2024-10-01 15:18:16.378809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.907 [2024-10-01 15:18:16.378905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:17.907 [2024-10-01 15:18:16.379047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:17.907 [2024-10-01 15:18:16.379062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:17.907 [2024-10-01 15:18:16.379079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.907 [2024-10-01 15:18:16.379236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 260.859 ms, result 0
00:18:17.907 true
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86319
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 86319 ']'
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 86319
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:17.907 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86319
00:18:18.166 killing process with pid 86319
Received shutdown signal, test time was about 4.000000 seconds
00:18:18.166
00:18:18.166 Latency(us)
00:18:18.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:18.166 ===================================================================================================================
00:18:18.166 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:18.166 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:18.166 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:18.166 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86319'
00:18:18.166 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 86319
00:18:18.166 15:18:16 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 86319
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:18:21.454 Remove shared memory files
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:18:21.454 15:18:19
ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:21.454 15:18:19 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:21.454 ************************************ 00:18:21.454 END TEST ftl_bdevperf 00:18:21.454 ************************************ 00:18:21.454 00:18:21.454 real 0m25.097s 00:18:21.454 user 0m27.994s 00:18:21.454 sys 0m1.506s 00:18:21.454 15:18:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.454 15:18:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:21.454 15:18:19 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:21.454 15:18:19 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:21.454 15:18:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.454 15:18:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:21.454 ************************************ 00:18:21.454 START TEST ftl_trim 00:18:21.454 ************************************ 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:21.454 * Looking for test storage... 00:18:21.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.454 15:18:19 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:21.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.454 --rc genhtml_branch_coverage=1 00:18:21.454 --rc genhtml_function_coverage=1 00:18:21.454 --rc genhtml_legend=1 00:18:21.454 --rc geninfo_all_blocks=1 00:18:21.454 --rc geninfo_unexecuted_blocks=1 00:18:21.454 00:18:21.454 ' 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:21.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.454 --rc genhtml_branch_coverage=1 00:18:21.454 --rc genhtml_function_coverage=1 00:18:21.454 --rc genhtml_legend=1 00:18:21.454 --rc geninfo_all_blocks=1 00:18:21.454 --rc geninfo_unexecuted_blocks=1 00:18:21.454 00:18:21.454 ' 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:21.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.454 --rc genhtml_branch_coverage=1 00:18:21.454 --rc genhtml_function_coverage=1 00:18:21.454 --rc genhtml_legend=1 00:18:21.454 --rc geninfo_all_blocks=1 00:18:21.454 --rc geninfo_unexecuted_blocks=1 00:18:21.454 00:18:21.454 ' 00:18:21.454 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:21.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.454 --rc genhtml_branch_coverage=1 00:18:21.454 --rc genhtml_function_coverage=1 00:18:21.454 --rc genhtml_legend=1 00:18:21.454 --rc geninfo_all_blocks=1 00:18:21.454 --rc geninfo_unexecuted_blocks=1 00:18:21.454 00:18:21.454 ' 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
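The xtrace run above is SPDK's scripts/common.sh comparing the installed lcov version ("1.15") against "2" field by field (lt delegating to cmp_versions, splitting on ., -, and :), apparently to pick coverage options compatible with the installed lcov 1.x release. A rough, unofficial Python rendering of that comparison logic, for illustration only:

    import re

    def version_lt(a: str, b: str) -> bool:
        # Split on the same separators the shell uses (IFS=.-:) and compare
        # numerically, padding the shorter version with zeros.
        pa = [int(x) for x in re.split(r"[.:-]", a)]
        pb = [int(x) for x in re.split(r"[.:-]", b)]
        width = max(len(pa), len(pb))
        pa += [0] * (width - len(pa))
        pb += [0] * (width - len(pb))
        return pa < pb  # list comparison is lexicographic, like the shell loop

    print(version_lt("1.15", "2"))  # True -> the lcov 1.x options are exported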
00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.454 15:18:19 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.455 15:18:19 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86679 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86679 00:18:21.455 15:18:19 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86679 ']' 00:18:21.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:21.455 15:18:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:21.455 [2024-10-01 15:18:19.796096] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:21.455 [2024-10-01 15:18:19.796281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86679 ] 00:18:21.455 [2024-10-01 15:18:19.971196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.715 [2024-10-01 15:18:20.022067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.715 [2024-10-01 15:18:20.022164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.715 [2024-10-01 15:18:20.022315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.283 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.283 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:22.284 15:18:20 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:22.594 15:18:20 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:22.594 15:18:20 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:22.594 15:18:20 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:22.594 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:22.594 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:22.594 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:22.594 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:22.594 15:18:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:22.594 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:22.594 { 00:18:22.594 "name": "nvme0n1", 00:18:22.594 "aliases": [ 
00:18:22.594 "311d8b1f-28d8-4772-8f3b-43a2f7798a6e" 00:18:22.594 ], 00:18:22.594 "product_name": "NVMe disk", 00:18:22.594 "block_size": 4096, 00:18:22.594 "num_blocks": 1310720, 00:18:22.594 "uuid": "311d8b1f-28d8-4772-8f3b-43a2f7798a6e", 00:18:22.594 "numa_id": -1, 00:18:22.594 "assigned_rate_limits": { 00:18:22.594 "rw_ios_per_sec": 0, 00:18:22.594 "rw_mbytes_per_sec": 0, 00:18:22.594 "r_mbytes_per_sec": 0, 00:18:22.594 "w_mbytes_per_sec": 0 00:18:22.594 }, 00:18:22.594 "claimed": true, 00:18:22.594 "claim_type": "read_many_write_one", 00:18:22.594 "zoned": false, 00:18:22.594 "supported_io_types": { 00:18:22.594 "read": true, 00:18:22.594 "write": true, 00:18:22.594 "unmap": true, 00:18:22.594 "flush": true, 00:18:22.594 "reset": true, 00:18:22.594 "nvme_admin": true, 00:18:22.594 "nvme_io": true, 00:18:22.594 "nvme_io_md": false, 00:18:22.594 "write_zeroes": true, 00:18:22.594 "zcopy": false, 00:18:22.594 "get_zone_info": false, 00:18:22.594 "zone_management": false, 00:18:22.594 "zone_append": false, 00:18:22.594 "compare": true, 00:18:22.594 "compare_and_write": false, 00:18:22.594 "abort": true, 00:18:22.594 "seek_hole": false, 00:18:22.594 "seek_data": false, 00:18:22.594 "copy": true, 00:18:22.594 "nvme_iov_md": false 00:18:22.594 }, 00:18:22.594 "driver_specific": { 00:18:22.594 "nvme": [ 00:18:22.594 { 00:18:22.594 "pci_address": "0000:00:11.0", 00:18:22.594 "trid": { 00:18:22.594 "trtype": "PCIe", 00:18:22.594 "traddr": "0000:00:11.0" 00:18:22.594 }, 00:18:22.594 "ctrlr_data": { 00:18:22.594 "cntlid": 0, 00:18:22.594 "vendor_id": "0x1b36", 00:18:22.594 "model_number": "QEMU NVMe Ctrl", 00:18:22.594 "serial_number": "12341", 00:18:22.594 "firmware_revision": "8.0.0", 00:18:22.594 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:22.594 "oacs": { 00:18:22.594 "security": 0, 00:18:22.594 "format": 1, 00:18:22.594 "firmware": 0, 00:18:22.594 "ns_manage": 1 00:18:22.594 }, 00:18:22.594 "multi_ctrlr": false, 00:18:22.594 "ana_reporting": false 00:18:22.594 }, 00:18:22.594 "vs": { 00:18:22.594 "nvme_version": "1.4" 00:18:22.594 }, 00:18:22.594 "ns_data": { 00:18:22.594 "id": 1, 00:18:22.594 "can_share": false 00:18:22.594 } 00:18:22.594 } 00:18:22.594 ], 00:18:22.594 "mp_policy": "active_passive" 00:18:22.594 } 00:18:22.594 } 00:18:22.594 ]' 00:18:22.594 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:22.853 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:22.853 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:22.853 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:22.853 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:22.853 15:18:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:22.853 15:18:21 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:22.853 15:18:21 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:22.853 15:18:21 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:22.853 15:18:21 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:22.853 15:18:21 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:23.112 15:18:21 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c9dadc9b-67ea-41c3-9043-06aab3add9f2 00:18:23.112 15:18:21 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:23.112 15:18:21 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c9dadc9b-67ea-41c3-9043-06aab3add9f2 00:18:23.371 15:18:21 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:23.371 15:18:21 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=87165344-2743-40ea-934b-bffc6c5a5297 00:18:23.371 15:18:21 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 87165344-2743-40ea-934b-bffc6c5a5297 00:18:23.630 15:18:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:23.631 15:18:22 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.631 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.631 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:23.631 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:23.631 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:23.631 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:23.889 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.889 { 00:18:23.889 "name": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:23.889 "aliases": [ 00:18:23.889 "lvs/nvme0n1p0" 00:18:23.889 ], 00:18:23.889 "product_name": "Logical Volume", 00:18:23.889 "block_size": 4096, 00:18:23.889 "num_blocks": 26476544, 00:18:23.889 "uuid": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:23.889 "assigned_rate_limits": { 00:18:23.889 "rw_ios_per_sec": 0, 00:18:23.889 "rw_mbytes_per_sec": 0, 00:18:23.889 "r_mbytes_per_sec": 0, 00:18:23.889 "w_mbytes_per_sec": 0 00:18:23.889 }, 00:18:23.889 "claimed": false, 00:18:23.889 "zoned": false, 00:18:23.889 "supported_io_types": { 00:18:23.889 "read": true, 00:18:23.889 "write": true, 00:18:23.890 "unmap": true, 00:18:23.890 "flush": false, 00:18:23.890 "reset": true, 00:18:23.890 "nvme_admin": false, 00:18:23.890 "nvme_io": false, 00:18:23.890 "nvme_io_md": false, 00:18:23.890 "write_zeroes": true, 00:18:23.890 "zcopy": false, 00:18:23.890 "get_zone_info": false, 00:18:23.890 "zone_management": false, 00:18:23.890 "zone_append": false, 00:18:23.890 "compare": false, 00:18:23.890 "compare_and_write": false, 00:18:23.890 "abort": false, 00:18:23.890 "seek_hole": true, 00:18:23.890 "seek_data": true, 00:18:23.890 "copy": false, 00:18:23.890 "nvme_iov_md": false 00:18:23.890 }, 00:18:23.890 "driver_specific": { 00:18:23.890 "lvol": { 00:18:23.890 "lvol_store_uuid": "87165344-2743-40ea-934b-bffc6c5a5297", 00:18:23.890 "base_bdev": "nvme0n1", 00:18:23.890 "thin_provision": true, 00:18:23.890 "num_allocated_clusters": 0, 00:18:23.890 "snapshot": false, 00:18:23.890 "clone": false, 00:18:23.890 "esnap_clone": false 00:18:23.890 } 00:18:23.890 } 00:18:23.890 } 00:18:23.890 ]' 00:18:23.890 15:18:22 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.890 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.890 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.149 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.149 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.149 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.149 15:18:22 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:24.149 15:18:22 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:24.149 15:18:22 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:24.408 15:18:22 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:24.408 15:18:22 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:24.408 15:18:22 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.408 { 00:18:24.408 "name": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:24.408 "aliases": [ 00:18:24.408 "lvs/nvme0n1p0" 00:18:24.408 ], 00:18:24.408 "product_name": "Logical Volume", 00:18:24.408 "block_size": 4096, 00:18:24.408 "num_blocks": 26476544, 00:18:24.408 "uuid": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:24.408 "assigned_rate_limits": { 00:18:24.408 "rw_ios_per_sec": 0, 00:18:24.408 "rw_mbytes_per_sec": 0, 00:18:24.408 "r_mbytes_per_sec": 0, 00:18:24.408 "w_mbytes_per_sec": 0 00:18:24.408 }, 00:18:24.408 "claimed": false, 00:18:24.408 "zoned": false, 00:18:24.408 "supported_io_types": { 00:18:24.408 "read": true, 00:18:24.408 "write": true, 00:18:24.408 "unmap": true, 00:18:24.408 "flush": false, 00:18:24.408 "reset": true, 00:18:24.408 "nvme_admin": false, 00:18:24.408 "nvme_io": false, 00:18:24.408 "nvme_io_md": false, 00:18:24.408 "write_zeroes": true, 00:18:24.408 "zcopy": false, 00:18:24.408 "get_zone_info": false, 00:18:24.408 "zone_management": false, 00:18:24.408 "zone_append": false, 00:18:24.408 "compare": false, 00:18:24.408 "compare_and_write": false, 00:18:24.408 "abort": false, 00:18:24.408 "seek_hole": true, 00:18:24.408 "seek_data": true, 00:18:24.408 "copy": false, 00:18:24.408 "nvme_iov_md": false 00:18:24.408 }, 00:18:24.408 "driver_specific": { 00:18:24.408 "lvol": { 00:18:24.408 "lvol_store_uuid": "87165344-2743-40ea-934b-bffc6c5a5297", 00:18:24.408 "base_bdev": "nvme0n1", 00:18:24.408 "thin_provision": true, 00:18:24.408 "num_allocated_clusters": 0, 00:18:24.408 "snapshot": false, 00:18:24.408 "clone": false, 00:18:24.408 "esnap_clone": false 00:18:24.408 } 00:18:24.408 } 00:18:24.408 } 00:18:24.408 ]' 00:18:24.408 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:24.666 15:18:22 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:18:24.666 15:18:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.666 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.666 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.666 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.666 15:18:23 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.666 15:18:23 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:24.925 15:18:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:24.925 15:18:23 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:24.925 15:18:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9697dd42-e934-4c27-b583-59a9cdd4338e 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.925 { 00:18:24.925 "name": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:24.925 "aliases": [ 00:18:24.925 "lvs/nvme0n1p0" 00:18:24.925 ], 00:18:24.925 "product_name": "Logical Volume", 00:18:24.925 "block_size": 4096, 00:18:24.925 "num_blocks": 26476544, 00:18:24.925 "uuid": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:24.925 "assigned_rate_limits": { 00:18:24.925 "rw_ios_per_sec": 0, 00:18:24.925 "rw_mbytes_per_sec": 0, 00:18:24.925 "r_mbytes_per_sec": 0, 00:18:24.925 "w_mbytes_per_sec": 0 00:18:24.925 }, 00:18:24.925 "claimed": false, 00:18:24.925 "zoned": false, 00:18:24.925 "supported_io_types": { 00:18:24.925 "read": true, 00:18:24.925 "write": true, 00:18:24.925 "unmap": true, 00:18:24.925 "flush": false, 00:18:24.925 "reset": true, 00:18:24.925 "nvme_admin": false, 00:18:24.925 "nvme_io": false, 00:18:24.925 "nvme_io_md": false, 00:18:24.925 "write_zeroes": true, 00:18:24.925 "zcopy": false, 00:18:24.925 "get_zone_info": false, 00:18:24.925 "zone_management": false, 00:18:24.925 "zone_append": false, 00:18:24.925 "compare": false, 00:18:24.925 "compare_and_write": false, 00:18:24.925 "abort": false, 00:18:24.925 "seek_hole": true, 00:18:24.925 "seek_data": true, 00:18:24.925 "copy": false, 00:18:24.925 "nvme_iov_md": false 00:18:24.925 }, 00:18:24.925 "driver_specific": { 00:18:24.925 "lvol": { 00:18:24.925 "lvol_store_uuid": "87165344-2743-40ea-934b-bffc6c5a5297", 00:18:24.925 "base_bdev": "nvme0n1", 00:18:24.925 "thin_provision": true, 00:18:24.925 "num_allocated_clusters": 0, 00:18:24.925 "snapshot": false, 00:18:24.925 "clone": false, 00:18:24.925 "esnap_clone": false 00:18:24.925 } 00:18:24.925 } 00:18:24.925 } 00:18:24.925 ]' 00:18:24.925 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.184 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.184 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.184 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:18:25.184 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:25.184 15:18:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:25.184 15:18:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:25.184 15:18:23 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9697dd42-e934-4c27-b583-59a9cdd4338e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:25.444 [2024-10-01 15:18:23.755467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.755697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.444 [2024-10-01 15:18:23.755741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:25.444 [2024-10-01 15:18:23.755756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.758549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.758605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.444 [2024-10-01 15:18:23.758621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:18:25.444 [2024-10-01 15:18:23.758639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.758793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.444 [2024-10-01 15:18:23.759054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.444 [2024-10-01 15:18:23.759076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.759089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.444 [2024-10-01 15:18:23.759101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:25.444 [2024-10-01 15:18:23.759113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.759446] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cd477c1e-ca00-4fe0-8c49-749bfe193526 00:18:25.444 [2024-10-01 15:18:23.760960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.761102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:25.444 [2024-10-01 15:18:23.761208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:25.444 [2024-10-01 15:18:23.761248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.768970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.769149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.444 [2024-10-01 15:18:23.769271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.615 ms 00:18:25.444 [2024-10-01 15:18:23.769287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.769458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.769473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.444 [2024-10-01 15:18:23.769488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.072 ms 00:18:25.444 [2024-10-01 15:18:23.769511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.769557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.769583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.444 [2024-10-01 15:18:23.769596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:25.444 [2024-10-01 15:18:23.769606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.769653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:25.444 [2024-10-01 15:18:23.771531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.771567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.444 [2024-10-01 15:18:23.771592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:18:25.444 [2024-10-01 15:18:23.771605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.771652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.771676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.444 [2024-10-01 15:18:23.771687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:25.444 [2024-10-01 15:18:23.771702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.771732] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:25.444 [2024-10-01 15:18:23.771895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.444 [2024-10-01 15:18:23.771917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.444 [2024-10-01 15:18:23.771933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:25.444 [2024-10-01 15:18:23.771946] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.444 [2024-10-01 15:18:23.771961] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.444 [2024-10-01 15:18:23.771973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:25.444 [2024-10-01 15:18:23.771985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.444 [2024-10-01 15:18:23.771996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.444 [2024-10-01 15:18:23.772008] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.444 [2024-10-01 15:18:23.772019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 [2024-10-01 15:18:23.772032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.444 [2024-10-01 15:18:23.772043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:25.444 [2024-10-01 15:18:23.772058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.772146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.444 
[2024-10-01 15:18:23.772184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.444 [2024-10-01 15:18:23.772195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:25.444 [2024-10-01 15:18:23.772208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.444 [2024-10-01 15:18:23.772327] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.444 [2024-10-01 15:18:23.772342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.444 [2024-10-01 15:18:23.772352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.444 [2024-10-01 15:18:23.772392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.444 [2024-10-01 15:18:23.772424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.444 [2024-10-01 15:18:23.772445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.444 [2024-10-01 15:18:23.772457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:25.444 [2024-10-01 15:18:23.772466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.444 [2024-10-01 15:18:23.772481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.444 [2024-10-01 15:18:23.772491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:25.444 [2024-10-01 15:18:23.772502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.444 [2024-10-01 15:18:23.772525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.444 [2024-10-01 15:18:23.772557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.444 [2024-10-01 15:18:23.772590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.444 [2024-10-01 15:18:23.772620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:25.444 [2024-10-01 15:18:23.772656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.444 [2024-10-01 15:18:23.772678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.444 [2024-10-01 15:18:23.772688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:25.444 [2024-10-01 15:18:23.772699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.444 [2024-10-01 15:18:23.772708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.444 [2024-10-01 15:18:23.772720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:25.445 [2024-10-01 15:18:23.772729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.445 [2024-10-01 15:18:23.772741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.445 [2024-10-01 15:18:23.772750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:25.445 [2024-10-01 15:18:23.772763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.445 [2024-10-01 15:18:23.772772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.445 [2024-10-01 15:18:23.772783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:25.445 [2024-10-01 15:18:23.772792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.445 [2024-10-01 15:18:23.772804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.445 [2024-10-01 15:18:23.772814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.445 [2024-10-01 15:18:23.772841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.445 [2024-10-01 15:18:23.772851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.445 [2024-10-01 15:18:23.772876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.445 [2024-10-01 15:18:23.772887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.445 [2024-10-01 15:18:23.772899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.445 [2024-10-01 15:18:23.772908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.445 [2024-10-01 15:18:23.772920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.445 [2024-10-01 15:18:23.772929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.445 [2024-10-01 15:18:23.772947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.445 [2024-10-01 15:18:23.772960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.772974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:25.445 [2024-10-01 15:18:23.772985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:25.445 [2024-10-01 15:18:23.772998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:25.445 [2024-10-01 15:18:23.773009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:25.445 [2024-10-01 15:18:23.773022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:25.445 [2024-10-01 15:18:23.773033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:25.445 [2024-10-01 15:18:23.773048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:25.445 [2024-10-01 15:18:23.773059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:25.445 [2024-10-01 15:18:23.773071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:25.445 [2024-10-01 15:18:23.773082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:25.445 [2024-10-01 15:18:23.773140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.445 [2024-10-01 15:18:23.773152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.445 [2024-10-01 15:18:23.773193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.445 [2024-10-01 15:18:23.773206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.445 [2024-10-01 15:18:23.773216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.445 [2024-10-01 15:18:23.773237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.445 [2024-10-01 15:18:23.773248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.445 [2024-10-01 15:18:23.773268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:18:25.445 [2024-10-01 15:18:23.773289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.445 [2024-10-01 15:18:23.773392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:25.445 [2024-10-01 15:18:23.773406] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:28.774 [2024-10-01 15:18:26.765299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.765367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:28.774 [2024-10-01 15:18:26.765389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2996.756 ms 00:18:28.774 [2024-10-01 15:18:26.765401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.785259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.785316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:28.774 [2024-10-01 15:18:26.785338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.762 ms 00:18:28.774 [2024-10-01 15:18:26.785349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.785563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.785581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:28.774 [2024-10-01 15:18:26.785600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:28.774 [2024-10-01 15:18:26.785613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.798001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.798051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:28.774 [2024-10-01 15:18:26.798069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.361 ms 00:18:28.774 [2024-10-01 15:18:26.798079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.798201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.798215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:28.774 [2024-10-01 15:18:26.798242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.774 [2024-10-01 15:18:26.798253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.798711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.798724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:28.774 [2024-10-01 15:18:26.798751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:18:28.774 [2024-10-01 15:18:26.798761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.774 [2024-10-01 15:18:26.798901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.774 [2024-10-01 15:18:26.798919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:28.775 [2024-10-01 15:18:26.798946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:28.775 [2024-10-01 15:18:26.798956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.806167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.806221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:28.775 [2024-10-01 15:18:26.806237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.182 ms 00:18:28.775 [2024-10-01 15:18:26.806248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.813980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:28.775 [2024-10-01 15:18:26.830691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.830763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:28.775 [2024-10-01 15:18:26.830780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.368 ms 00:18:28.775 [2024-10-01 15:18:26.830793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.906622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.906698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:28.775 [2024-10-01 15:18:26.906715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.810 ms 00:18:28.775 [2024-10-01 15:18:26.906731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.906953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.906970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:28.775 [2024-10-01 15:18:26.906986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:28.775 [2024-10-01 15:18:26.906999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.910903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.910949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:28.775 [2024-10-01 15:18:26.910963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.863 ms 00:18:28.775 [2024-10-01 15:18:26.910977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.913853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.913894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:28.775 [2024-10-01 15:18:26.913909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:18:28.775 [2024-10-01 15:18:26.913922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.914247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.914268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:28.775 [2024-10-01 15:18:26.914285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:28.775 [2024-10-01 15:18:26.914301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.948421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.948489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:28.775 [2024-10-01 15:18:26.948507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.133 ms 00:18:28.775 [2024-10-01 15:18:26.948537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
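
The "l2p maximum resident size is: 59 (of 60) MiB" notice just above follows from the layout dump earlier in the trace. As a cross-check, here is the arithmetic in shell form; every input value (the 102400 MiB data_btm region, 4096-byte blocks, the 4-byte L2P address size, and --overprovisioning 10) appears verbatim in this log:

    # plain shell arithmetic; inputs are the values printed by this run
    blocks=$(( 102400 * 1024 * 1024 / 4096 ))   # data_btm region -> 26214400 blocks
    usable=$(( blocks * 90 / 100 ))             # minus 10% overprovisioning -> 23592960
    l2p_mib=$(( usable * 4 / 1024 / 1024 ))     # 4-byte L2P entries -> 90 MiB full table
    echo "$usable usable blocks, $l2p_mib MiB L2P"

23592960 matches both the "L2P entries" line in the layout dump and the num_blocks reported for ftl0 below, and a 90 MiB full table capped by --l2p_dram_limit 60 is exactly why the cache pins at most 59 MiB resident.
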
00:18:28.775 [2024-10-01 15:18:26.953251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.953307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:28.775 [2024-10-01 15:18:26.953322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.625 ms 00:18:28.775 [2024-10-01 15:18:26.953339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.956885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.956925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:28.775 [2024-10-01 15:18:26.956938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.481 ms 00:18:28.775 [2024-10-01 15:18:26.956951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.960801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.960855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:28.775 [2024-10-01 15:18:26.960869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms 00:18:28.775 [2024-10-01 15:18:26.960885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.960936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.960965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:28.775 [2024-10-01 15:18:26.960977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:28.775 [2024-10-01 15:18:26.960994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.961081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.775 [2024-10-01 15:18:26.961096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:28.775 [2024-10-01 15:18:26.961107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:28.775 [2024-10-01 15:18:26.961119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.775 [2024-10-01 15:18:26.962484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.775 [2024-10-01 15:18:26.963601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3211.893 ms, result 0 00:18:28.775 [2024-10-01 15:18:26.964411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:28.775 { 00:18:28.775 "name": "ftl0", 00:18:28.775 "uuid": "cd477c1e-ca00-4fe0-8c49-749bfe193526" 00:18:28.775 } 00:18:28.775 15:18:26 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.775 15:18:26 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:28.775 15:18:27 ftl.ftl_trim -- 
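
The waitforbdev call traced at this point reduces to two RPCs. A condensed sketch of what the autotest_common.sh lines here execute (the real helper also carries retry plumbing that this run never exercises; rpc_py is the variable set at trim.sh@12):

    # wait until bdev examination settles, then poll for the named bdev
    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}    # timeout in ms, defaulted as in this run
        $rpc_py bdev_wait_for_examine
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }
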
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:29.035 [ 00:18:29.035 { 00:18:29.035 "name": "ftl0", 00:18:29.035 "aliases": [ 00:18:29.035 "cd477c1e-ca00-4fe0-8c49-749bfe193526" 00:18:29.035 ], 00:18:29.035 "product_name": "FTL disk", 00:18:29.035 "block_size": 4096, 00:18:29.035 "num_blocks": 23592960, 00:18:29.035 "uuid": "cd477c1e-ca00-4fe0-8c49-749bfe193526", 00:18:29.035 "assigned_rate_limits": { 00:18:29.035 "rw_ios_per_sec": 0, 00:18:29.035 "rw_mbytes_per_sec": 0, 00:18:29.035 "r_mbytes_per_sec": 0, 00:18:29.035 "w_mbytes_per_sec": 0 00:18:29.035 }, 00:18:29.035 "claimed": false, 00:18:29.035 "zoned": false, 00:18:29.035 "supported_io_types": { 00:18:29.035 "read": true, 00:18:29.035 "write": true, 00:18:29.035 "unmap": true, 00:18:29.035 "flush": true, 00:18:29.035 "reset": false, 00:18:29.035 "nvme_admin": false, 00:18:29.035 "nvme_io": false, 00:18:29.035 "nvme_io_md": false, 00:18:29.035 "write_zeroes": true, 00:18:29.035 "zcopy": false, 00:18:29.035 "get_zone_info": false, 00:18:29.035 "zone_management": false, 00:18:29.035 "zone_append": false, 00:18:29.035 "compare": false, 00:18:29.035 "compare_and_write": false, 00:18:29.035 "abort": false, 00:18:29.035 "seek_hole": false, 00:18:29.035 "seek_data": false, 00:18:29.035 "copy": false, 00:18:29.035 "nvme_iov_md": false 00:18:29.035 }, 00:18:29.035 "driver_specific": { 00:18:29.035 "ftl": { 00:18:29.035 "base_bdev": "9697dd42-e934-4c27-b583-59a9cdd4338e", 00:18:29.035 "cache": "nvc0n1p0" 00:18:29.035 } 00:18:29.035 } 00:18:29.035 } 00:18:29.035 ] 00:18:29.035 15:18:27 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:29.035 15:18:27 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:29.035 15:18:27 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:29.294 15:18:27 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:29.294 15:18:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:29.552 15:18:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:29.552 { 00:18:29.552 "name": "ftl0", 00:18:29.552 "aliases": [ 00:18:29.552 "cd477c1e-ca00-4fe0-8c49-749bfe193526" 00:18:29.552 ], 00:18:29.552 "product_name": "FTL disk", 00:18:29.552 "block_size": 4096, 00:18:29.552 "num_blocks": 23592960, 00:18:29.552 "uuid": "cd477c1e-ca00-4fe0-8c49-749bfe193526", 00:18:29.552 "assigned_rate_limits": { 00:18:29.552 "rw_ios_per_sec": 0, 00:18:29.552 "rw_mbytes_per_sec": 0, 00:18:29.552 "r_mbytes_per_sec": 0, 00:18:29.552 "w_mbytes_per_sec": 0 00:18:29.552 }, 00:18:29.552 "claimed": false, 00:18:29.552 "zoned": false, 00:18:29.552 "supported_io_types": { 00:18:29.552 "read": true, 00:18:29.552 "write": true, 00:18:29.552 "unmap": true, 00:18:29.552 "flush": true, 00:18:29.552 "reset": false, 00:18:29.552 "nvme_admin": false, 00:18:29.552 "nvme_io": false, 00:18:29.552 "nvme_io_md": false, 00:18:29.552 "write_zeroes": true, 00:18:29.553 "zcopy": false, 00:18:29.553 "get_zone_info": false, 00:18:29.553 "zone_management": false, 00:18:29.553 "zone_append": false, 00:18:29.553 "compare": false, 00:18:29.553 "compare_and_write": false, 00:18:29.553 "abort": false, 00:18:29.553 "seek_hole": false, 00:18:29.553 "seek_data": false, 00:18:29.553 "copy": false, 00:18:29.553 "nvme_iov_md": false 00:18:29.553 }, 00:18:29.553 "driver_specific": { 00:18:29.553 "ftl": { 00:18:29.553 "base_bdev": "9697dd42-e934-4c27-b583-59a9cdd4338e", 
00:18:29.553 "cache": "nvc0n1p0" 00:18:29.553 } 00:18:29.553 } 00:18:29.553 } 00:18:29.553 ]' 00:18:29.553 15:18:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:29.553 15:18:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:29.553 15:18:27 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:29.553 [2024-10-01 15:18:28.083232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.083474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:29.553 [2024-10-01 15:18:28.083571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.553 [2024-10-01 15:18:28.083613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.553 [2024-10-01 15:18:28.083711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:29.553 [2024-10-01 15:18:28.084442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.084567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:29.553 [2024-10-01 15:18:28.084659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:18:29.553 [2024-10-01 15:18:28.084706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.553 [2024-10-01 15:18:28.085287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.085405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:29.553 [2024-10-01 15:18:28.085518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:18:29.553 [2024-10-01 15:18:28.085566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.553 [2024-10-01 15:18:28.088516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.088626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:29.553 [2024-10-01 15:18:28.088645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:18:29.553 [2024-10-01 15:18:28.088658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.553 [2024-10-01 15:18:28.094313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.094350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:29.553 [2024-10-01 15:18:28.094363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.607 ms 00:18:29.553 [2024-10-01 15:18:28.094396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.553 [2024-10-01 15:18:28.096078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.553 [2024-10-01 15:18:28.096220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:29.553 [2024-10-01 15:18:28.096241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:18:29.553 [2024-10-01 15:18:28.096253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.100604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.100659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:29.813 [2024-10-01 15:18:28.100674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.284 ms 00:18:29.813 [2024-10-01 15:18:28.100687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.100852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.100871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:29.813 [2024-10-01 15:18:28.100882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:29.813 [2024-10-01 15:18:28.100898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.102437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.102472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:29.813 [2024-10-01 15:18:28.102484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:18:29.813 [2024-10-01 15:18:28.102499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.103898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.104031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:29.813 [2024-10-01 15:18:28.104050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:18:29.813 [2024-10-01 15:18:28.104062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.105135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.105167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:29.813 [2024-10-01 15:18:28.105189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:18:29.813 [2024-10-01 15:18:28.105202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.106473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-10-01 15:18:28.106506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:29.813 [2024-10-01 15:18:28.106518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:18:29.813 [2024-10-01 15:18:28.106530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-10-01 15:18:28.106586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:29.813 [2024-10-01 15:18:28.106606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:29.813 [2024-10-01 15:18:28.106698] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[ftl_dev_dump_bands, Bands 9-100: all identical: 0 / 261120 wr_cnt: 0 state: free]
00:18:29.814 [2024-10-01 15:18:28.108770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:29.814 [2024-10-01 15:18:28.108781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd477c1e-ca00-4fe0-8c49-749bfe193526
00:18:29.814 [2024-10-01 15:18:28.108794] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:29.814 [2024-10-01 15:18:28.108804] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:29.814 [2024-10-01 15:18:28.108817] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:29.814 [2024-10-01 15:18:28.108827] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:29.814 [2024-10-01 15:18:28.108839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:29.814 [2024-10-01 15:18:28.108850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:29.814 [2024-10-01 15:18:28.108866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:29.814 [2024-10-01 15:18:28.108876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:29.814 [2024-10-01 15:18:28.108887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
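The two dumps above are regular enough to parse mechanically: every ftl_dev_dump_bands record carries "Band N: valid / total wr_cnt: W state: S", and ftl_dev_dump_stats reports total and user writes, whose ratio is the write amplification factor (WAF); it prints as inf here because this phase issued 960 internal writes and no user writes. A minimal, hypothetical parser for a captured log (the regexes and the summarize helper are illustrative, not part of the SPDK tree):

```python
import re

BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")
STAT_RE = re.compile(r"(total writes|user writes): (\d+)")

def summarize(log_text: str) -> None:
    bands, stats = [], {}
    for line in log_text.splitlines():
        if m := BAND_RE.search(line):
            bands.append((int(m[1]), m[5]))          # (band number, state)
        elif m := STAT_RE.search(line):
            stats[m[1]] = int(m[2])
    free = sum(1 for _, state in bands if state == "free")
    print(f"{len(bands)} bands, {free} free")
    total, user = stats.get("total writes", 0), stats.get("user writes", 0)
    # WAF = total device writes / user writes; 960 / 0 is what prints as "inf" above
    print("WAF:", total / user if user else float("inf"))
```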
[mngt/ftl_mngt.c trace_step records, condensed one line per step, FTL shutdown:]
  Action: Dump statistics (duration: 2.316 ms, status: 0)
  Action: Deinitialize L2P (duration: 1.813 ms, status: 0)
  Action: Deinitialize P2L checkpointing (duration: 0.084 ms, status: 0)
  Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
  Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
  Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
  Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
  Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
  Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
  Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
  Rollback: Initialize bands (duration: 0.000 ms, status: 0)
  Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
  Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
  Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
  Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:18:29.815 [2024-10-01 15:18:28.144282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.130 ms, result 0
00:18:29.815 true
00:18:29.815 15:18:28 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86679
00:18:29.815 15:18:28 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86679 ']'
00:18:29.815 15:18:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86679
00:18:29.815 15:18:28 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:18:29.815 15:18:28 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:29.815 15:18:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86679
killing process with pid 86679
15:18:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
15:18:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
15:18:28 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86679'
15:18:28 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86679
15:18:28 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86679
00:18:33.102 15:18:31 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:18:34.038 65536+0 records in
00:18:34.038 65536+0 records out
00:18:34.038 268435456 bytes (268 MB, 256 MiB) copied, 1.03028 s, 261 MB/s
00:18:34.038 15:18:32 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-10-01 15:18:32.349685] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
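The dd numbers are internally consistent: 65536 records of 4 KiB come to 268435456 bytes, and dd reports decimal megabytes, so 268435456 B / 1.03028 s is about 260.5 MB/s, rounded to the 261 MB/s shown. A quick check in plain arithmetic:

```python
records, bs = 65536, 4096            # dd if=/dev/urandom bs=4K count=65536
total_bytes = records * bs
assert total_bytes == 268435456      # "268435456 bytes (268 MB, 256 MiB)"
elapsed = 1.03028                    # seconds, from the dd summary line
print(f"{total_bytes / elapsed / 1e6:.0f} MB/s")  # -> 261, matching the log
```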
00:18:34.038 [2024-10-01 15:18:32.350047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86861 ]
00:18:34.038 [2024-10-01 15:18:32.516206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.038 [2024-10-01 15:18:32.566058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.297 [2024-10-01 15:18:32.668877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:34.297 [2024-10-01 15:18:32.668956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[mngt/ftl_mngt.c trace_step records, condensed one line per step, FTL startup:]
  Action: Check configuration (duration: 0.005 ms, status: 0)
  Action: Open base bdev (duration: 2.552 ms, status: 0)
00:18:34.297 [2024-10-01 15:18:32.830279] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:34.297 [2024-10-01 15:18:32.830494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
  Action: Open cache bdev (duration: 0.249 ms, status: 0)
00:18:34.297 [2024-10-01 15:18:32.832226] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
  Action: Load super block (duration: 2.553 ms, status: 0)
  Action: Validate super block (duration: 0.021 ms, status: 0)
  Action: Initialize memory pools (duration: 6.530 ms, status: 0)
  Action: Initialize bands (duration: 0.081 ms, status: 0)
  Action: Register IO device (duration: 0.008 ms, status: 0)
00:18:34.297 [2024-10-01 15:18:32.842847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
  Action: Initialize core IO channel (duration: 1.725 ms, status: 0)
  Action: Decorate bands (duration: 0.010 ms, status: 0)
00:18:34.557 [2024-10-01 15:18:32.844728] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[upgrade/ftl_sb_v5.c records, condensed: nvc layout blob load/store 0x150 bytes, base layout blob load/store 0x48 bytes, layout blob load/store 0x190 bytes]
00:18:34.557 [2024-10-01 15:18:32.844954] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:34.557 [2024-10-01 15:18:32.844966] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:34.557 [2024-10-01 15:18:32.844977] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:18:34.557 [2024-10-01 15:18:32.844986] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:34.557 [2024-10-01 15:18:32.845003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:34.557 [2024-10-01 15:18:32.845020] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
  Action: Initialize layout (duration: 0.312 ms, status: 0)
  Action: Verify layout (duration: 0.055 ms, status: 0)
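These parameters pin down the L2P table size: 23592960 entries times 4 bytes per entry is exactly 90 MiB, which is the size of the l2p region in the layout dump that follows. A cross-check in plain arithmetic:

```python
entries, addr_size = 23_592_960, 4   # "L2P entries" and "L2P address size" above
l2p_bytes = entries * addr_size
print(l2p_bytes, "bytes =", l2p_bytes / 2**20, "MiB")  # 94371840 bytes = 90.0 MiB
# 90.00 MiB is exactly what the l2p region occupies in the NV cache layout below
```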
00:18:34.557 [2024-10-01 15:18:32.845298] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[dump_region records, condensed: region, offset (MiB), blocks (MiB)]
  sb                    0.00       0.12
  l2p                   0.12      90.00
  band_md              90.12       0.50
  band_md_mirror       90.62       0.50
  nvc_md              123.88       0.12
  nvc_md_mirror       124.00       0.12
  p2l0                 91.12       8.00
  p2l1                 99.12       8.00
  p2l2                107.12       8.00
  p2l3                115.12       8.00
  trim_md             123.12       0.25
  trim_md_mirror      123.38       0.25
  trim_log            123.62       0.12
  trim_log_mirror     123.75       0.12
00:18:34.558 [2024-10-01 15:18:32.845723] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  sb_mirror             0.00       0.12
  vmap             102400.25       3.38
  data_btm              0.25  102400.00
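The offsets in the NV cache layout chain together: each region begins where the previous one ends, from sb at 0.00 through trim_log_mirror at 123.75 and the nvc_md pair at the top. A small check of that invariant over the dumped values (transcribed by hand; the tolerance absorbs the dump's two-decimal rounding of 0.125 MiB values):

```python
# (name, offset MiB, size MiB), in address order, transcribed from the dump above
regions = [
    ("sb", 0.00, 0.12), ("l2p", 0.12, 90.00), ("band_md", 90.12, 0.50),
    ("band_md_mirror", 90.62, 0.50), ("p2l0", 91.12, 8.00), ("p2l1", 99.12, 8.00),
    ("p2l2", 107.12, 8.00), ("p2l3", 115.12, 8.00), ("trim_md", 123.12, 0.25),
    ("trim_md_mirror", 123.38, 0.25), ("trim_log", 123.62, 0.12),
    ("trim_log_mirror", 123.75, 0.12), ("nvc_md", 123.88, 0.12),
    ("nvc_md_mirror", 124.00, 0.12),
]
for (name, off, size), (nxt, nxt_off, _) in zip(regions, regions[1:]):
    assert abs(off + size - nxt_off) <= 0.011, (name, nxt)  # "0.12" really means 0.125
print("NV cache regions are contiguous from 0.00 to 124.12 MiB")
```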
00:18:34.558 [2024-10-01 15:18:32.845819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[region records, condensed: type, ver, blk_offs, blk_sz]
  type:0x0         ver:5  blk_offs:0x0     blk_sz:0x20
  type:0x2         ver:0  blk_offs:0x20    blk_sz:0x5a00
  type:0x3         ver:2  blk_offs:0x5a20  blk_sz:0x80
  type:0x4         ver:2  blk_offs:0x5aa0  blk_sz:0x80
  type:0xa         ver:2  blk_offs:0x5b20  blk_sz:0x800
  type:0xb         ver:2  blk_offs:0x6320  blk_sz:0x800
  type:0xc         ver:2  blk_offs:0x6b20  blk_sz:0x800
  type:0xd         ver:2  blk_offs:0x7320  blk_sz:0x800
  type:0xe         ver:0  blk_offs:0x7b20  blk_sz:0x40
  type:0xf         ver:0  blk_offs:0x7b60  blk_sz:0x40
  type:0x10        ver:1  blk_offs:0x7ba0  blk_sz:0x20
  type:0x11        ver:1  blk_offs:0x7bc0  blk_sz:0x20
  type:0x6         ver:2  blk_offs:0x7be0  blk_sz:0x20
  type:0x7         ver:2  blk_offs:0x7c00  blk_sz:0x20
  type:0xfffffffe  ver:0  blk_offs:0x7c20  blk_sz:0x13b6e0
00:18:34.558 [2024-10-01 15:18:32.845994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  type:0x1         ver:5  blk_offs:0x0        blk_sz:0x20
  type:0xfffffffe  ver:0  blk_offs:0x20       blk_sz:0x20
  type:0x9         ver:0  blk_offs:0x40       blk_sz:0x1900000
  type:0x5         ver:0  blk_offs:0x1900040  blk_sz:0x360
  type:0xfffffffe  ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
  Action: Layout upgrade (duration: 0.816 ms, status: 0)
  Action: Initialize metadata (duration: 20.090 ms, status: 0)
  Action: Initialize band addresses (duration: 0.069 ms, status: 0)
  Action: Initialize NV cache (duration: 11.178 ms, status: 0)
  Action: Initialize valid map (duration: 0.011 ms, status: 0)
  Action: Initialize trim map (duration: 0.421 ms, status: 0)
  Action: Initialize bands metadata (duration: 0.096 ms, status: 0)
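The SB tables above give the same layout in raw hex blocks, and converting them confirms the MiB dumps. With a 4096-byte FTL block (inferred here from the fact that the numbers line up; the log itself never states it), type:0x2 at blk_offs:0x20, blk_sz:0x5a00 is exactly the l2p region at 0.12 MiB / 90.00 MiB, and the large type:0xfffffffe area at blk_offs:0x7c20 runs out to 5171.00 MiB, the NV cache device capacity. A converter sketch:

```python
FTL_BLOCK = 4096  # bytes; inferred, since 0x5a00 blocks * 4096 B = 90 MiB = the l2p region

def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
    """(offset MiB, size MiB) for one SB metadata region row."""
    scale = FTL_BLOCK / 2**20
    return blk_offs * scale, blk_sz * scale

print(region_mib(0x20, 0x5a00))       # (0.125, 90.0)  -> the l2p row above
print(region_mib(0x5a20, 0x80))       # (90.125, 0.5)  -> band_md
off, size = region_mib(0x7c20, 0x13b6e0)
print(off + size)                     # 5171.0 MiB, the NV cache device capacity
```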
  Action: Initialize reloc (duration: 6.265 ms, status: 0)
00:18:34.559 [2024-10-01 15:18:32.887637] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:18:34.559 [2024-10-01 15:18:32.887680] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
  Action: Restore NV cache metadata (duration: 2.450 ms, status: 0)
  Action: Restore valid map metadata (duration: 13.243 ms, status: 0)
  Action: Restore band info metadata (duration: 1.528 ms, status: 0)
  Action: Restore trim metadata (duration: 1.337 ms, status: 0)
  Action: Initialize P2L checkpointing (duration: 0.229 ms, status: 0)
  Action: Restore P2L checkpoints (duration: 18.991 ms, status: 0)
00:18:34.559 [2024-10-01 15:18:32.930802] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
  Action: Initialize L2P (duration: 23.078 ms, status: 0)
  Action: Restore L2P (duration: 0.007 ms, status: 0)
  Action: Finalize band initialization (duration: 0.039 ms, status: 0)
  Action: Start core poller (duration: 0.005 ms, status: 0)
00:18:34.559 [2024-10-01 15:18:32.948117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
  Action: Self test on startup (duration: 0.013 ms, status: 0)
  Action: Set FTL dirty state (duration: 3.692 ms, status: 0)
  Action: Finalize initialization (duration: 0.037 ms, status: 0)
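Condensed this way, the trace doubles as a startup profile: Initialize L2P (23.078 ms), Initialize metadata (20.090 ms), Restore P2L checkpoints (18.991 ms) and Initialize NV cache (11.178 ms) dominate, and the per-step durations sum to roughly 118 ms of the 125.702 ms total reported just below. A small aggregator over the condensed step lines (the one-line format is this transcript's condensation, not SPDK's own output):

```python
import re

STEP_RE = re.compile(r"(?:Action|Rollback): (.+) \(duration: ([0-9.]+) ms")

def profile(trace: str, top: int = 4) -> list[tuple[str, float]]:
    steps = [(m[1], float(m[2])) for m in STEP_RE.finditer(trace)]
    print(f"{len(steps)} steps, {sum(d for _, d in steps):.3f} ms total")
    return sorted(steps, key=lambda s: s[1], reverse=True)[:top]

# profile(startup_trace) -> [("Initialize L2P", 23.078), ("Initialize metadata", 20.09), ...]
```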
00:18:34.559 [2024-10-01 15:18:32.953038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:34.559 [2024-10-01 15:18:32.953980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.702 ms, result 0
00:18:34.559 [2024-10-01 15:18:32.954622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:34.559 [2024-10-01 15:18:32.964531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:44.391  Copying: 28/256 [MB] (28 MBps) Copying: 55/256 [MB] (27 MBps) Copying: 82/256 [MB] (26 MBps) Copying: 107/256 [MB] (25 MBps) Copying: 133/256 [MB] (25 MBps) Copying: 159/256 [MB] (26 MBps) Copying: 185/256 [MB] (25 MBps) Copying: 210/256 [MB] (25 MBps) Copying: 235/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 26 MBps)
00:18:44.391 [2024-10-01 15:18:42.761735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[mngt/ftl_mngt.c trace_step records, condensed one line per step, FTL shutdown:]
  Action: Deinit core IO channel (duration: 0.003 ms, status: 0)
00:18:44.391 [2024-10-01 15:18:42.763167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
  Action: Unregister IO device (duration: 0.636 ms, status: 0)
  Action: Stop core poller (duration: 1.761 ms, status: 0)
  Action: Persist L2P (duration: 6.721 ms, status: 0)
  Action: Finish L2P trims (duration: 5.638 ms, status: 0)
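The write itself moved 256 MB at an average of 26 MBps, i.e. just under ten seconds, which lines up with the gap between the IO channel creation at 15:18:32.96 and its destruction at 15:18:42.76 (assuming, plausibly but not provably from the log alone, that those two records bracket the copy). Checking:

```python
from datetime import datetime

start = datetime.fromisoformat("2024-10-01 15:18:32.964531")  # IO channel created
end = datetime.fromisoformat("2024-10-01 15:18:42.761735")    # IO channel destroyed
elapsed = (end - start).total_seconds()                        # ~9.80 s
print(f"{256 / elapsed:.1f} MBps")                             # ~26.1, matching "average 26 MBps"
```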
  Action: Persist NV cache metadata (duration: 1.455 ms, status: 0)
  Action: Persist valid map metadata (duration: 3.623 ms, status: 0)
  Action: Persist P2L metadata (duration: 0.072 ms, status: 0)
  Action: Persist band info metadata (duration: 2.003 ms, status: 0)
  Action: Persist trim metadata (duration: 1.483 ms, status: 0)
  Action: Persist superblock (duration: 1.071 ms, status: 0)
  Action: Set FTL clean state (duration: 0.990 ms, status: 0)
00:18:44.392 [2024-10-01 15:18:42.789870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:44.392 [2024-10-01 15:18:42.789887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[ftl_dev_dump_bands, Bands 2-76: all identical: 0 / 261120 wr_cnt: 0 state: free]
00:18:44.393 [2024-10-01 15:18:42.790706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.393 [2024-10-01 15:18:42.790959] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.393 [2024-10-01 15:18:42.790968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
cd477c1e-ca00-4fe0-8c49-749bfe193526 00:18:44.393 [2024-10-01 15:18:42.790979] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.393 [2024-10-01 15:18:42.791001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.393 [2024-10-01 15:18:42.791011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.393 [2024-10-01 15:18:42.791021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.393 [2024-10-01 15:18:42.791030] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.393 [2024-10-01 15:18:42.791040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.393 [2024-10-01 15:18:42.791049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.393 [2024-10-01 15:18:42.791057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.393 [2024-10-01 15:18:42.791066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.393 [2024-10-01 15:18:42.791076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.393 [2024-10-01 15:18:42.791086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.393 [2024-10-01 15:18:42.791097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:18:44.393 [2024-10-01 15:18:42.791110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.792838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.393 [2024-10-01 15:18:42.792855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.393 [2024-10-01 15:18:42.792875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:18:44.393 [2024-10-01 15:18:42.792884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.792986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.393 [2024-10-01 15:18:42.792997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.393 [2024-10-01 15:18:42.793011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:44.393 [2024-10-01 15:18:42.793021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.799193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.799217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.393 [2024-10-01 15:18:42.799229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.393 [2024-10-01 15:18:42.799240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.799311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.799324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.393 [2024-10-01 15:18:42.799339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.393 [2024-10-01 15:18:42.799350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.799394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.799406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.393 
[2024-10-01 15:18:42.799424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.393 [2024-10-01 15:18:42.799434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.799453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.799470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.393 [2024-10-01 15:18:42.799488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.393 [2024-10-01 15:18:42.799508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.812769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.812994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.393 [2024-10-01 15:18:42.813017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.393 [2024-10-01 15:18:42.813027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.393 [2024-10-01 15:18:42.821309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.393 [2024-10-01 15:18:42.821351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.394 [2024-10-01 15:18:42.821370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.394 [2024-10-01 15:18:42.821404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.394 [2024-10-01 15:18:42.821437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.394 [2024-10-01 15:18:42.821448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.394 [2024-10-01 15:18:42.821458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.394 [2024-10-01 15:18:42.821475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.394 [2024-10-01 15:18:42.821504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.394 [2024-10-01 15:18:42.821515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.394 [2024-10-01 15:18:42.821532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.394 [2024-10-01 15:18:42.821542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.394 [2024-10-01 15:18:42.821629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.394 [2024-10-01 15:18:42.821642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.394 [2024-10-01 15:18:42.821652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.394 [2024-10-01 15:18:42.821662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.394 [2024-10-01 15:18:42.821695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.394 [2024-10-01 15:18:42.821707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.394 [2024-10-01 15:18:42.821717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.394 [2024-10-01 15:18:42.821727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.394 [2024-10-01 15:18:42.821772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.394 [2024-10-01 15:18:42.821783] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:44.394 [2024-10-01 15:18:42.821793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:44.394 [2024-10-01 15:18:42.821803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:44.394 [2024-10-01 15:18:42.821858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:44.394 [2024-10-01 15:18:42.821870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:44.394 [2024-10-01 15:18:42.821881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:44.394 [2024-10-01 15:18:42.821890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:44.394 [2024-10-01 15:18:42.822029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.025 ms, result 0
00:18:44.964
00:18:44.964
00:18:44.964 15:18:43 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:44.964 15:18:43 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86969
00:18:44.964 15:18:43 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86969
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86969 ']'
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:44.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:44.964 15:18:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:44.964 [2024-10-01 15:18:43.356138] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
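At this point trim.sh relaunches spdk_tgt with the ftl_init debug log flag and blocks in waitforlisten until the new target accepts connections on the RPC socket /var/tmp/spdk.sock; the traced shell above shows the helper's rpc_addr and max_retries=100 budget, and it also checks that the pid it was given is still alive. waitforlisten itself is shell code in common/autotest_common.sh. Purely as an illustrative sketch, not SPDK's actual implementation, the readiness probe it performs could be written in Python along these lines (the socket path, timeout, and poll interval here are assumed values):

    import os
    import socket
    import time

    def wait_for_rpc_listener(path="/var/tmp/spdk.sock", timeout=30.0, poll=0.2):
        # Hypothetical stand-in for the harness's waitforlisten helper;
        # the real helper is bash and additionally verifies the pid is alive.
        # Polls until a UNIX-domain socket at `path` accepts a connection.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                try:
                    s.connect(path)
                    return  # target is up; rpc.py calls can proceed
                except OSError:
                    pass  # socket file exists but nothing is accepting yet
                finally:
                    s.close()
            time.sleep(poll)
        raise TimeoutError(f"no listener on {path} within {timeout}s")

Once the connection succeeds, the test drives the target over that same socket via scripts/rpc.py: the load_config call below, then the two bdev_ftl_unmap calls that exercise the trim path.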
00:18:44.964 [2024-10-01 15:18:43.356463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86969 ] 00:18:45.224 [2024-10-01 15:18:43.526436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.224 [2024-10-01 15:18:43.577429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.791 15:18:44 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.791 15:18:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:45.791 15:18:44 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:46.049 [2024-10-01 15:18:44.391875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.049 [2024-10-01 15:18:44.391956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.049 [2024-10-01 15:18:44.566582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.049 [2024-10-01 15:18:44.566651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.049 [2024-10-01 15:18:44.566668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.049 [2024-10-01 15:18:44.566682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.049 [2024-10-01 15:18:44.569195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.049 [2024-10-01 15:18:44.569236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.049 [2024-10-01 15:18:44.569253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:18:46.049 [2024-10-01 15:18:44.569265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.569356] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.050 [2024-10-01 15:18:44.569594] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.050 [2024-10-01 15:18:44.569612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.569631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.050 [2024-10-01 15:18:44.569643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:18:46.050 [2024-10-01 15:18:44.569655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.571197] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:46.050 [2024-10-01 15:18:44.573778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.573815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:46.050 [2024-10-01 15:18:44.573831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:18:46.050 [2024-10-01 15:18:44.573841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.573924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.573936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:46.050 [2024-10-01 15:18:44.573953] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:46.050 [2024-10-01 15:18:44.573963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.580727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.580898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.050 [2024-10-01 15:18:44.580941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.724 ms 00:18:46.050 [2024-10-01 15:18:44.580953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.581098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.581115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.050 [2024-10-01 15:18:44.581129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:46.050 [2024-10-01 15:18:44.581140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.581243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.581267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.050 [2024-10-01 15:18:44.581282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:46.050 [2024-10-01 15:18:44.581297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.581328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:46.050 [2024-10-01 15:18:44.583001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.583023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.050 [2024-10-01 15:18:44.583035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:18:46.050 [2024-10-01 15:18:44.583047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.583094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.583113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.050 [2024-10-01 15:18:44.583124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:46.050 [2024-10-01 15:18:44.583140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.583162] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:46.050 [2024-10-01 15:18:44.583197] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:46.050 [2024-10-01 15:18:44.583243] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:46.050 [2024-10-01 15:18:44.583273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:46.050 [2024-10-01 15:18:44.583364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.050 [2024-10-01 15:18:44.583383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.050 [2024-10-01 15:18:44.583397] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:46.050 [2024-10-01 15:18:44.583417] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.050 [2024-10-01 15:18:44.583429] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.050 [2024-10-01 15:18:44.583446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:46.050 [2024-10-01 15:18:44.583462] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.050 [2024-10-01 15:18:44.583474] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.050 [2024-10-01 15:18:44.583484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.050 [2024-10-01 15:18:44.583501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.583514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.050 [2024-10-01 15:18:44.583527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:46.050 [2024-10-01 15:18:44.583541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.583637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.583659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.050 [2024-10-01 15:18:44.583673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:46.050 [2024-10-01 15:18:44.583683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.050 [2024-10-01 15:18:44.583805] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.050 [2024-10-01 15:18:44.583826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.050 [2024-10-01 15:18:44.583844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.050 [2024-10-01 15:18:44.583854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.583870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.050 [2024-10-01 15:18:44.583881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.583895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:46.050 [2024-10-01 15:18:44.583905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.050 [2024-10-01 15:18:44.583928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:46.050 [2024-10-01 15:18:44.583938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.050 [2024-10-01 15:18:44.583951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.050 [2024-10-01 15:18:44.583961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:46.050 [2024-10-01 15:18:44.583973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.050 [2024-10-01 15:18:44.583983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.050 [2024-10-01 15:18:44.583996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:46.050 [2024-10-01 15:18:44.584006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 
[2024-10-01 15:18:44.584018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.050 [2024-10-01 15:18:44.584028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:46.050 [2024-10-01 15:18:44.584040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.050 [2024-10-01 15:18:44.584065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.050 [2024-10-01 15:18:44.584087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.050 [2024-10-01 15:18:44.584097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.050 [2024-10-01 15:18:44.584118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.050 [2024-10-01 15:18:44.584130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.050 [2024-10-01 15:18:44.584157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.050 [2024-10-01 15:18:44.584167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.050 [2024-10-01 15:18:44.584529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.050 [2024-10-01 15:18:44.584567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.050 [2024-10-01 15:18:44.584632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.050 [2024-10-01 15:18:44.584662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:46.050 [2024-10-01 15:18:44.584755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.050 [2024-10-01 15:18:44.584792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.050 [2024-10-01 15:18:44.584826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:46.050 [2024-10-01 15:18:44.584869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.584900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.050 [2024-10-01 15:18:44.584929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:46.050 [2024-10-01 15:18:44.585080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.585109] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.050 [2024-10-01 15:18:44.585151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.050 [2024-10-01 15:18:44.585195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.050 [2024-10-01 15:18:44.585230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.050 [2024-10-01 15:18:44.585294] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:46.050 [2024-10-01 15:18:44.585386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.050 [2024-10-01 15:18:44.585466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.050 [2024-10-01 15:18:44.585507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.050 [2024-10-01 15:18:44.585538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.050 [2024-10-01 15:18:44.585625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.050 [2024-10-01 15:18:44.585661] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.050 [2024-10-01 15:18:44.585732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.585847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:46.050 [2024-10-01 15:18:44.585941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:46.050 [2024-10-01 15:18:44.585994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:46.050 [2024-10-01 15:18:44.586083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:46.050 [2024-10-01 15:18:44.586138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:46.050 [2024-10-01 15:18:44.586203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:46.050 [2024-10-01 15:18:44.586292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:46.050 [2024-10-01 15:18:44.586465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:46.050 [2024-10-01 15:18:44.586517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:46.050 [2024-10-01 15:18:44.586570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.586620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.586774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.586824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.586881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:46.050 [2024-10-01 15:18:44.586931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.050 [2024-10-01 
15:18:44.586985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.587090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.050 [2024-10-01 15:18:44.587144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.050 [2024-10-01 15:18:44.587215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.050 [2024-10-01 15:18:44.587267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.050 [2024-10-01 15:18:44.587361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.050 [2024-10-01 15:18:44.587403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.050 [2024-10-01 15:18:44.587435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:18:46.050 [2024-10-01 15:18:44.587467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.599833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.600060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.308 [2024-10-01 15:18:44.600250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.279 ms 00:18:46.308 [2024-10-01 15:18:44.600297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.600480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.600529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.308 [2024-10-01 15:18:44.600615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:46.308 [2024-10-01 15:18:44.600657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.611800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.611990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.308 [2024-10-01 15:18:44.612084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.108 ms 00:18:46.308 [2024-10-01 15:18:44.612131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.612255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.612303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.308 [2024-10-01 15:18:44.612385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.308 [2024-10-01 15:18:44.612424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.612882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.612924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.308 [2024-10-01 15:18:44.612945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:46.308 [2024-10-01 15:18:44.612959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.613078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.613095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.308 [2024-10-01 15:18:44.613111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:46.308 [2024-10-01 15:18:44.613130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.635834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.635910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.308 [2024-10-01 15:18:44.635940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.714 ms 00:18:46.308 [2024-10-01 15:18:44.635980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.639378] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:46.308 [2024-10-01 15:18:44.639456] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:46.308 [2024-10-01 15:18:44.639486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.639514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:46.308 [2024-10-01 15:18:44.639539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:18:46.308 [2024-10-01 15:18:44.639564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.655065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.655166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:46.308 [2024-10-01 15:18:44.655198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.446 ms 00:18:46.308 [2024-10-01 15:18:44.655216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.657920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.658079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:46.308 [2024-10-01 15:18:44.658100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:18:46.308 [2024-10-01 15:18:44.658114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.659649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.659717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:46.308 [2024-10-01 15:18:44.659730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:18:46.308 [2024-10-01 15:18:44.659743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.660094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.660118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.308 [2024-10-01 15:18:44.660141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:18:46.308 [2024-10-01 15:18:44.660156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 
15:18:44.681885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.681965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:46.308 [2024-10-01 15:18:44.681992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.716 ms 00:18:46.308 [2024-10-01 15:18:44.682009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.689069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:46.308 [2024-10-01 15:18:44.706402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.706456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.308 [2024-10-01 15:18:44.706477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.305 ms 00:18:46.308 [2024-10-01 15:18:44.706488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.706612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.706636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:46.308 [2024-10-01 15:18:44.706651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.308 [2024-10-01 15:18:44.706665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.706732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.706744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.308 [2024-10-01 15:18:44.706760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:46.308 [2024-10-01 15:18:44.706771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.706799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.706810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.308 [2024-10-01 15:18:44.706837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:46.308 [2024-10-01 15:18:44.706847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.706895] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.308 [2024-10-01 15:18:44.706908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.706920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.308 [2024-10-01 15:18:44.706931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:46.308 [2024-10-01 15:18:44.706943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.710786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.710831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.308 [2024-10-01 15:18:44.710845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.825 ms 00:18:46.308 [2024-10-01 15:18:44.710859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.710970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.308 [2024-10-01 15:18:44.710986] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.308 [2024-10-01 15:18:44.710998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:46.308 [2024-10-01 15:18:44.711018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.308 [2024-10-01 15:18:44.711994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.308 [2024-10-01 15:18:44.713047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.324 ms, result 0 00:18:46.308 [2024-10-01 15:18:44.714009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:46.308 Some configs were skipped because the RPC state that can call them passed over. 00:18:46.308 15:18:44 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:46.565 [2024-10-01 15:18:44.952530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.565 [2024-10-01 15:18:44.952787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:46.565 [2024-10-01 15:18:44.952822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:18:46.565 [2024-10-01 15:18:44.952858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.565 [2024-10-01 15:18:44.952917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.003 ms, result 0 00:18:46.565 true 00:18:46.565 15:18:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:46.823 [2024-10-01 15:18:45.163983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.823 [2024-10-01 15:18:45.164057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:46.823 [2024-10-01 15:18:45.164075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:18:46.823 [2024-10-01 15:18:45.164089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.823 [2024-10-01 15:18:45.164131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.365 ms, result 0 00:18:46.823 true 00:18:46.823 15:18:45 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86969 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86969 ']' 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86969 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86969 00:18:46.823 killing process with pid 86969 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86969' 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86969 00:18:46.823 15:18:45 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86969 00:18:46.823 [2024-10-01 15:18:45.364683] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.823 [2024-10-01 15:18:45.364747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:46.823 [2024-10-01 15:18:45.364766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.823 [2024-10-01 15:18:45.364777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.823 [2024-10-01 15:18:45.364807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:46.823 [2024-10-01 15:18:45.365461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.823 [2024-10-01 15:18:45.365476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:46.823 [2024-10-01 15:18:45.365487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:18:46.823 [2024-10-01 15:18:45.365500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.823 [2024-10-01 15:18:45.365780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.823 [2024-10-01 15:18:45.365796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:46.823 [2024-10-01 15:18:45.365807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:18:46.823 [2024-10-01 15:18:45.365821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.823 [2024-10-01 15:18:45.369036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.823 [2024-10-01 15:18:45.369077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:46.823 [2024-10-01 15:18:45.369090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:18:46.823 [2024-10-01 15:18:45.369103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.374775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.374821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.082 [2024-10-01 15:18:45.374833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.643 ms 00:18:47.082 [2024-10-01 15:18:45.374848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.376470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.376512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.082 [2024-10-01 15:18:45.376525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:18:47.082 [2024-10-01 15:18:45.376537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.380372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.380413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.082 [2024-10-01 15:18:45.380426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms 00:18:47.082 [2024-10-01 15:18:45.380455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.380574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.380589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.082 [2024-10-01 15:18:45.380601] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:47.082 [2024-10-01 15:18:45.380614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.382671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.382710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.082 [2024-10-01 15:18:45.382722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:18:47.082 [2024-10-01 15:18:45.382737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.384197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.384242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.082 [2024-10-01 15:18:45.384255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:18:47.082 [2024-10-01 15:18:45.384267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.385497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.385553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.082 [2024-10-01 15:18:45.385573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:18:47.082 [2024-10-01 15:18:45.385593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.386699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.082 [2024-10-01 15:18:45.386745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.082 [2024-10-01 15:18:45.386757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:18:47.082 [2024-10-01 15:18:45.386770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.082 [2024-10-01 15:18:45.386803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.082 [2024-10-01 15:18:45.386823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386966] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.386992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.082 [2024-10-01 15:18:45.387004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 
[2024-10-01 15:18:45.387310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:47.083 [2024-10-01 15:18:45.387632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.387992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.083 [2024-10-01 15:18:45.388544] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.083 [2024-10-01 15:18:45.388579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd477c1e-ca00-4fe0-8c49-749bfe193526 00:18:47.083 [2024-10-01 15:18:45.388784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.084 [2024-10-01 15:18:45.388816] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.084 [2024-10-01 15:18:45.388851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.084 [2024-10-01 15:18:45.388888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.084 [2024-10-01 15:18:45.388925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.084 [2024-10-01 15:18:45.388957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.084 [2024-10-01 15:18:45.389004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.084 [2024-10-01 15:18:45.389085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.084 [2024-10-01 15:18:45.389126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.084 [2024-10-01 15:18:45.389159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:47.084 [2024-10-01 15:18:45.389210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.084 [2024-10-01 15:18:45.389245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms 00:18:47.084 [2024-10-01 15:18:45.389286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.391094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.084 [2024-10-01 15:18:45.391237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.084 [2024-10-01 15:18:45.391324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:18:47.084 [2024-10-01 15:18:45.391367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.391560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.084 [2024-10-01 15:18:45.391617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.084 [2024-10-01 15:18:45.391803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:47.084 [2024-10-01 15:18:45.391855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.399057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.399245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.084 [2024-10-01 15:18:45.399339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.399390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.399575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.399613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.084 [2024-10-01 15:18:45.399628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.399661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.399730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.399753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.084 [2024-10-01 15:18:45.399772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.399790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.399815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.399833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.084 [2024-10-01 15:18:45.399846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.399864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.413969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.414052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.084 [2024-10-01 15:18:45.414069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.414084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 
15:18:45.422456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.422528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.084 [2024-10-01 15:18:45.422544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.422564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.422634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.422653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.084 [2024-10-01 15:18:45.422665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.422688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.422720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.422736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.084 [2024-10-01 15:18:45.422747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.422761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.422846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.422865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.084 [2024-10-01 15:18:45.422875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.422890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.422933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.422952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.084 [2024-10-01 15:18:45.422963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.422982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.423024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.423038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.084 [2024-10-01 15:18:45.423048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.423061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.423110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.084 [2024-10-01 15:18:45.423124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.084 [2024-10-01 15:18:45.423134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.084 [2024-10-01 15:18:45.423147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.084 [2024-10-01 15:18:45.423307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.695 ms, result 0 00:18:47.342 15:18:45 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:47.342 15:18:45 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:47.342 [2024-10-01 15:18:45.780981] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:47.342 [2024-10-01 15:18:45.781121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87011 ] 00:18:47.600 [2024-10-01 15:18:45.949584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.600 [2024-10-01 15:18:45.998055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.600 [2024-10-01 15:18:46.100873] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.600 [2024-10-01 15:18:46.100953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.860 [2024-10-01 15:18:46.260000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.860 [2024-10-01 15:18:46.260279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.860 [2024-10-01 15:18:46.260306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:47.860 [2024-10-01 15:18:46.260318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.860 [2024-10-01 15:18:46.262895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.860 [2024-10-01 15:18:46.262930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.860 [2024-10-01 15:18:46.262946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:18:47.860 [2024-10-01 15:18:46.262956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.860 [2024-10-01 15:18:46.263050] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.860 [2024-10-01 15:18:46.263286] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:47.861 [2024-10-01 15:18:46.263312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.263323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.861 [2024-10-01 15:18:46.263337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:47.861 [2024-10-01 15:18:46.263348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.265052] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:47.861 [2024-10-01 15:18:46.267522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.267675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:47.861 [2024-10-01 15:18:46.267701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:18:47.861 [2024-10-01 15:18:46.267732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.267814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.267828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:47.861 [2024-10-01 15:18:46.267840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.023 ms 00:18:47.861 [2024-10-01 15:18:46.267850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.274519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.274660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.861 [2024-10-01 15:18:46.274680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:18:47.861 [2024-10-01 15:18:46.274690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.274813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.274836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.861 [2024-10-01 15:18:46.274854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:47.861 [2024-10-01 15:18:46.274864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.274895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.274906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.861 [2024-10-01 15:18:46.274922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:47.861 [2024-10-01 15:18:46.274932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.274956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:47.861 [2024-10-01 15:18:46.276731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.276761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.861 [2024-10-01 15:18:46.276772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:18:47.861 [2024-10-01 15:18:46.276792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.276848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.276864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.861 [2024-10-01 15:18:46.276878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:47.861 [2024-10-01 15:18:46.276888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.276907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:47.861 [2024-10-01 15:18:46.276927] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:47.861 [2024-10-01 15:18:46.276970] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:47.861 [2024-10-01 15:18:46.276990] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:47.861 [2024-10-01 15:18:46.277081] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:47.861 [2024-10-01 15:18:46.277101] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.861 [2024-10-01 15:18:46.277115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:47.861 [2024-10-01 15:18:46.277128] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:47.861 [2024-10-01 15:18:46.277185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.861 [2024-10-01 15:18:46.277196] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:47.861 [2024-10-01 15:18:46.277205] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:47.861 [2024-10-01 15:18:46.277216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.277229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.861 [2024-10-01 15:18:46.277243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:47.861 [2024-10-01 15:18:46.277259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.277338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.861 [2024-10-01 15:18:46.277350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.861 [2024-10-01 15:18:46.277360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:47.861 [2024-10-01 15:18:46.277377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.861 [2024-10-01 15:18:46.277470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.861 [2024-10-01 15:18:46.277491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.861 [2024-10-01 15:18:46.277502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.861 [2024-10-01 15:18:46.277543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.861 [2024-10-01 15:18:46.277574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.861 [2024-10-01 15:18:46.277593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.861 [2024-10-01 15:18:46.277602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:47.861 [2024-10-01 15:18:46.277610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.861 [2024-10-01 15:18:46.277619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.861 [2024-10-01 15:18:46.277629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:47.861 [2024-10-01 15:18:46.277638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277647] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:47.861 [2024-10-01 15:18:46.277656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.861 [2024-10-01 15:18:46.277683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.861 [2024-10-01 15:18:46.277709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.861 [2024-10-01 15:18:46.277743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.861 [2024-10-01 15:18:46.277769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.861 [2024-10-01 15:18:46.277796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.861 [2024-10-01 15:18:46.277814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.861 [2024-10-01 15:18:46.277823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:47.861 [2024-10-01 15:18:46.277831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.861 [2024-10-01 15:18:46.277840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:47.861 [2024-10-01 15:18:46.277849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:47.861 [2024-10-01 15:18:46.277858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:47.861 [2024-10-01 15:18:46.277879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:47.861 [2024-10-01 15:18:46.277887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.861 [2024-10-01 15:18:46.277907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.861 [2024-10-01 15:18:46.277916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.861 [2024-10-01 15:18:46.277928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.861 [2024-10-01 15:18:46.277938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:47.861 
[2024-10-01 15:18:46.277948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.861 [2024-10-01 15:18:46.277957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.861 [2024-10-01 15:18:46.277966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.862 [2024-10-01 15:18:46.277975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.862 [2024-10-01 15:18:46.277984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.862 [2024-10-01 15:18:46.277994] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.862 [2024-10-01 15:18:46.278006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:47.862 [2024-10-01 15:18:46.278047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:47.862 [2024-10-01 15:18:46.278059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:47.862 [2024-10-01 15:18:46.278070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:47.862 [2024-10-01 15:18:46.278080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:47.862 [2024-10-01 15:18:46.278091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:47.862 [2024-10-01 15:18:46.278101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:47.862 [2024-10-01 15:18:46.278112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:47.862 [2024-10-01 15:18:46.278122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:47.862 [2024-10-01 15:18:46.278133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:47.862 [2024-10-01 15:18:46.278208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.862 [2024-10-01 15:18:46.278221] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.862 [2024-10-01 15:18:46.278247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.862 [2024-10-01 15:18:46.278259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.862 [2024-10-01 15:18:46.278270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.862 [2024-10-01 15:18:46.278282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.278293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.862 [2024-10-01 15:18:46.278315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:18:47.862 [2024-10-01 15:18:46.278333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.298388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.298439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.862 [2024-10-01 15:18:46.298457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.029 ms 00:18:47.862 [2024-10-01 15:18:46.298470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.298632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.298648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:47.862 [2024-10-01 15:18:46.298662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:47.862 [2024-10-01 15:18:46.298681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.309754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.310049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.862 [2024-10-01 15:18:46.310074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.048 ms 00:18:47.862 [2024-10-01 15:18:46.310086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.310201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.310218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.862 [2024-10-01 15:18:46.310234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.862 [2024-10-01 15:18:46.310246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.310686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.310700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.862 [2024-10-01 15:18:46.310729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:47.862 [2024-10-01 15:18:46.310740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 
15:18:46.310867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.310882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.862 [2024-10-01 15:18:46.310893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:47.862 [2024-10-01 15:18:46.310908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.317316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.317360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.862 [2024-10-01 15:18:46.317374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.393 ms 00:18:47.862 [2024-10-01 15:18:46.317392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.319927] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:47.862 [2024-10-01 15:18:46.319973] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:47.862 [2024-10-01 15:18:46.319988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.320000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:47.862 [2024-10-01 15:18:46.320012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:18:47.862 [2024-10-01 15:18:46.320022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.334060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.334105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:47.862 [2024-10-01 15:18:46.334120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.008 ms 00:18:47.862 [2024-10-01 15:18:46.334131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.336533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.336688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:47.862 [2024-10-01 15:18:46.336709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:18:47.862 [2024-10-01 15:18:46.336721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.338220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.338253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:47.862 [2024-10-01 15:18:46.338274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:18:47.862 [2024-10-01 15:18:46.338284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.338621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.338651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:47.862 [2024-10-01 15:18:46.338664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:18:47.862 [2024-10-01 15:18:46.338678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.359911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.360188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:47.862 [2024-10-01 15:18:46.360215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms 00:18:47.862 [2024-10-01 15:18:46.360228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.367314] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:47.862 [2024-10-01 15:18:46.384797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.384869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:47.862 [2024-10-01 15:18:46.384886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.379 ms 00:18:47.862 [2024-10-01 15:18:46.384896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.385044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.385059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:47.862 [2024-10-01 15:18:46.385072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:47.862 [2024-10-01 15:18:46.385093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.385158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.385176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:47.862 [2024-10-01 15:18:46.385187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:47.862 [2024-10-01 15:18:46.385215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.385245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.385266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:47.862 [2024-10-01 15:18:46.385285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:47.862 [2024-10-01 15:18:46.385295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.385352] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:47.862 [2024-10-01 15:18:46.385365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.862 [2024-10-01 15:18:46.385386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:47.862 [2024-10-01 15:18:46.385397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:47.862 [2024-10-01 15:18:46.385408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.862 [2024-10-01 15:18:46.389242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.863 [2024-10-01 15:18:46.389279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:47.863 [2024-10-01 15:18:46.389292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:18:47.863 [2024-10-01 15:18:46.389303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.863 [2024-10-01 15:18:46.389499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.863 [2024-10-01 15:18:46.389525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:47.863 [2024-10-01 15:18:46.389536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:47.863 [2024-10-01 15:18:46.389547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.863 [2024-10-01 15:18:46.390598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:47.863 [2024-10-01 15:18:46.391625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.498 ms, result 0 00:18:47.863 [2024-10-01 15:18:46.392262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:47.863 [2024-10-01 15:18:46.401985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:57.414  Copying: 30/256 [MB] (30 MBps) Copying: 57/256 [MB] (27 MBps) Copying: 84/256 [MB] (26 MBps) Copying: 110/256 [MB] (26 MBps) Copying: 136/256 [MB] (26 MBps) Copying: 164/256 [MB] (28 MBps) Copying: 192/256 [MB] (27 MBps) Copying: 218/256 [MB] (26 MBps) Copying: 246/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-10-01 15:18:55.714162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.414 [2024-10-01 15:18:55.715530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.715695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.414 [2024-10-01 15:18:55.715721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:57.414 [2024-10-01 15:18:55.715740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.715774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:57.414 [2024-10-01 15:18:55.716458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.716483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.414 [2024-10-01 15:18:55.716496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:18:57.414 [2024-10-01 15:18:55.716506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.716752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.716770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.414 [2024-10-01 15:18:55.716782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:57.414 [2024-10-01 15:18:55.716794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.719844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.719873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.414 [2024-10-01 15:18:55.719886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:18:57.414 [2024-10-01 15:18:55.719897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.726115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.726148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.414 [2024-10-01 15:18:55.726162] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.207 ms 00:18:57.414 [2024-10-01 15:18:55.726198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.727458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.727498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.414 [2024-10-01 15:18:55.727512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:18:57.414 [2024-10-01 15:18:55.727534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.731353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.731388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.414 [2024-10-01 15:18:55.731408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:18:57.414 [2024-10-01 15:18:55.731418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.731527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.731540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.414 [2024-10-01 15:18:55.731551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:57.414 [2024-10-01 15:18:55.731561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.733322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.733356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.414 [2024-10-01 15:18:55.733367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:18:57.414 [2024-10-01 15:18:55.733377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.734890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.735034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:57.414 [2024-10-01 15:18:55.735053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:18:57.414 [2024-10-01 15:18:55.735063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.736286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.736315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.414 [2024-10-01 15:18:55.736328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:18:57.414 [2024-10-01 15:18:55.736338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.737382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.414 [2024-10-01 15:18:55.737415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.414 [2024-10-01 15:18:55.737440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:18:57.414 [2024-10-01 15:18:55.737450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.414 [2024-10-01 15:18:55.737486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.414 [2024-10-01 15:18:55.737508] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:57.414 [2024-10-01 15:18:55.737687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.737996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:57.415 [2024-10-01 15:18:55.738688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:57.415 [2024-10-01 15:18:55.738712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd477c1e-ca00-4fe0-8c49-749bfe193526
00:18:57.415 [2024-10-01 15:18:55.738737] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:57.416 [2024-10-01 15:18:55.738747] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:57.416 [2024-10-01 15:18:55.738758] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:57.416 [2024-10-01 15:18:55.738769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:57.416 [2024-10-01 15:18:55.738779] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:57.416 [2024-10-01 15:18:55.738790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:57.416 [2024-10-01 15:18:55.738800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:57.416 [2024-10-01 15:18:55.738809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:57.416 [2024-10-01 15:18:55.738818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:57.416 [2024-10-01 15:18:55.738829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.416 [2024-10-01 15:18:55.738839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:57.416 [2024-10-01 15:18:55.738854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms
00:18:57.416 [2024-10-01 15:18:55.738865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.740664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.416 [2024-10-01 15:18:55.740687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:57.416 [2024-10-01 15:18:55.740699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms
00:18:57.416 [2024-10-01 15:18:55.740710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.740854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.416 [2024-10-01 15:18:55.740871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:57.416 [2024-10-01 15:18:55.740882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms
00:18:57.416 [2024-10-01 15:18:55.740892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.747716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.747848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:57.416 [2024-10-01 15:18:55.747928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.747967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.748105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.748219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:57.416 [2024-10-01 15:18:55.748264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.748296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.748418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.748463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:57.416 [2024-10-01 15:18:55.748499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.748640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.748703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.748736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:57.416 [2024-10-01 15:18:55.748785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.748815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.762313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.762540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:57.416 [2024-10-01 15:18:55.762617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.762652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.770900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.771077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:57.416 [2024-10-01 15:18:55.771159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.771215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.771271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.771303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:57.416 [2024-10-01 15:18:55.771334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.771364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.771412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.771493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:57.416 [2024-10-01 15:18:55.771529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.771564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.771717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.771820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:57.416 [2024-10-01 15:18:55.771860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.771892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.771978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.772051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:57.416 [2024-10-01 15:18:55.772150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.772164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.772227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.772238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:57.416 [2024-10-01 15:18:55.772249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.772259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.772306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.416 [2024-10-01 15:18:55.772327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:57.416 [2024-10-01 15:18:55.772338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.416 [2024-10-01 15:18:55.772351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.416 [2024-10-01 15:18:55.772487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.021 ms, result 0
00:18:57.674
00:18:57.674
00:18:57.674 15:18:56 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:18:57.674 15:18:56 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:18:58.240 15:18:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:58.240 [2024-10-01 15:18:56.570564] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:18:58.240 [2024-10-01 15:18:56.570711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87126 ]
00:18:58.240 [2024-10-01 15:18:56.740326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:58.498 [2024-10-01 15:18:56.791368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:58.498 [2024-10-01 15:18:56.894878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:58.498 [2024-10-01 15:18:56.894957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:58.757 [2024-10-01 15:18:57.053249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.757 [2024-10-01 15:18:57.053319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:58.757 [2024-10-01 15:18:57.053336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:18:58.757 [2024-10-01 15:18:57.053346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.757 [2024-10-01 15:18:57.055994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.757 [2024-10-01 15:18:57.056040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:58.758 [2024-10-01 15:18:57.056064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms
00:18:58.758 [2024-10-01 15:18:57.056081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.056232] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:58.758 [2024-10-01 15:18:57.056474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:58.758 [2024-10-01 15:18:57.056502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.056519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:58.758 [2024-10-01 15:18:57.056536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms
00:18:58.758 [2024-10-01 15:18:57.056564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.058324] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:58.758 [2024-10-01 15:18:57.060802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.060853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:58.758 [2024-10-01 15:18:57.060873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms
00:18:58.758 [2024-10-01 15:18:57.060887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.060971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.060993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:58.758 [2024-10-01 15:18:57.061017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:18:58.758 [2024-10-01 15:18:57.061028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.067727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.067756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:58.758 [2024-10-01 15:18:57.067769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.667 ms
00:18:58.758 [2024-10-01 15:18:57.067780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.067899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.067931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:58.758 [2024-10-01 15:18:57.067943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms
00:18:58.758 [2024-10-01 15:18:57.067954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.067992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.068005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:58.758 [2024-10-01 15:18:57.068020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:58.758 [2024-10-01 15:18:57.068037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.068062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:18:58.758 [2024-10-01 15:18:57.069793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.069821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:58.758 [2024-10-01 15:18:57.069834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms
00:18:58.758 [2024-10-01 15:18:57.069844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.069900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.069917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:58.758 [2024-10-01 15:18:57.069931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:58.758 [2024-10-01 15:18:57.069941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.069962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:58.758 [2024-10-01 15:18:57.069983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:58.758 [2024-10-01 15:18:57.070019] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:58.758 [2024-10-01 15:18:57.070049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:18:58.758 [2024-10-01 15:18:57.070144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:58.758 [2024-10-01 15:18:57.070157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:58.758 [2024-10-01 15:18:57.070186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:58.758 [2024-10-01 15:18:57.070200] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070212] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070223] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:18:58.758 [2024-10-01 15:18:57.070233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:58.758 [2024-10-01 15:18:57.070243] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:58.758 [2024-10-01 15:18:57.070253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:58.758 [2024-10-01 15:18:57.070263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.070277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:58.758 [2024-10-01 15:18:57.070296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms
00:18:58.758 [2024-10-01 15:18:57.070307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.070391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.758 [2024-10-01 15:18:57.070409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:58.758 [2024-10-01 15:18:57.070420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:18:58.758 [2024-10-01 15:18:57.070430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.758 [2024-10-01 15:18:57.070523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:58.758 [2024-10-01 15:18:57.070561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:58.758 [2024-10-01 15:18:57.070572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:58.758 [2024-10-01 15:18:57.070608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:58.758 [2024-10-01 15:18:57.070655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:58.758 [2024-10-01 15:18:57.070674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:58.758 [2024-10-01 15:18:57.070683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:18:58.758 [2024-10-01 15:18:57.070693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:58.758 [2024-10-01 15:18:57.070702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:58.758 [2024-10-01 15:18:57.070712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:18:58.758 [2024-10-01 15:18:57.070721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:58.758 [2024-10-01 15:18:57.070739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:58.758 [2024-10-01 15:18:57.070766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:58.758 [2024-10-01 15:18:57.070792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:58.758 [2024-10-01 15:18:57.070825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:58.758 [2024-10-01 15:18:57.070851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:58.758 [2024-10-01 15:18:57.070869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:58.758 [2024-10-01 15:18:57.070878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:58.758 [2024-10-01 15:18:57.070895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:58.758 [2024-10-01 15:18:57.070904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:18:58.758 [2024-10-01 15:18:57.070913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:58.758 [2024-10-01 15:18:57.070922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:58.758 [2024-10-01 15:18:57.070931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:18:58.758 [2024-10-01 15:18:57.070940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:58.758 [2024-10-01 15:18:57.070961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:18:58.758 [2024-10-01 15:18:57.070971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.758 [2024-10-01 15:18:57.070980] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:58.758 [2024-10-01 15:18:57.070997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:58.758 [2024-10-01 15:18:57.071007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:58.759 [2024-10-01 15:18:57.071016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:58.759 [2024-10-01 15:18:57.071026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:58.759 [2024-10-01 15:18:57.071035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:58.759 [2024-10-01 15:18:57.071044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:58.759 [2024-10-01 15:18:57.071053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:58.759 [2024-10-01 15:18:57.071062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:58.759 [2024-10-01 15:18:57.071071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:58.759 [2024-10-01 15:18:57.071082] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:58.759 [2024-10-01 15:18:57.071094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:18:58.759 [2024-10-01 15:18:57.071118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:18:58.759 [2024-10-01 15:18:57.071129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:18:58.759 [2024-10-01 15:18:57.071138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:18:58.759 [2024-10-01 15:18:57.071148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:18:58.759 [2024-10-01 15:18:57.071158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:18:58.759 [2024-10-01 15:18:57.071168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:18:58.759 [2024-10-01 15:18:57.071178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:18:58.759 [2024-10-01 15:18:57.071189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:18:58.759 [2024-10-01 15:18:57.071209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:18:58.759 [2024-10-01 15:18:57.071261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:58.759 [2024-10-01 15:18:57.071272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:58.759 [2024-10-01 15:18:57.071297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:58.759 [2024-10-01 15:18:57.071307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:58.759 [2024-10-01 15:18:57.071317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:58.759 [2024-10-01 15:18:57.071328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.071345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:58.759 [2024-10-01 15:18:57.071359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms
00:18:58.759 [2024-10-01 15:18:57.071370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.093315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.093375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:58.759 [2024-10-01 15:18:57.093409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.918 ms
00:18:58.759 [2024-10-01 15:18:57.093424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.093620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.093638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:58.759 [2024-10-01 15:18:57.093653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms
00:18:58.759 [2024-10-01 15:18:57.093673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.105649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.105703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:58.759 [2024-10-01 15:18:57.105718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.961 ms
00:18:58.759 [2024-10-01 15:18:57.105746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.105858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.105872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:58.759 [2024-10-01 15:18:57.105888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:58.759 [2024-10-01 15:18:57.105898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.106359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.106379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:58.759 [2024-10-01 15:18:57.106391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms
00:18:58.759 [2024-10-01 15:18:57.106402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.106529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.106548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:58.759 [2024-10-01 15:18:57.106560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms
00:18:58.759 [2024-10-01 15:18:57.106574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.113009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.113076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:58.759 [2024-10-01 15:18:57.113090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.420 ms
00:18:58.759 [2024-10-01 15:18:57.113117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.115807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:18:58.759 [2024-10-01 15:18:57.115850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:58.759 [2024-10-01 15:18:57.115866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.115877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:58.759 [2024-10-01 15:18:57.115888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.627 ms
00:18:58.759 [2024-10-01 15:18:57.115898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.129731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.129878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:58.759 [2024-10-01 15:18:57.129911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.776 ms
00:18:58.759 [2024-10-01 15:18:57.129923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.131863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.131899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:58.759 [2024-10-01 15:18:57.131912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms
00:18:58.759 [2024-10-01 15:18:57.131922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.133402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.133433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:58.759 [2024-10-01 15:18:57.133453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms
00:18:58.759 [2024-10-01 15:18:57.133462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.133791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.133808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:58.759 [2024-10-01 15:18:57.133820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms
00:18:58.759 [2024-10-01 15:18:57.133836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.154247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.154315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:58.759 [2024-10-01 15:18:57.154332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.418 ms
00:18:58.759 [2024-10-01 15:18:57.154344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.160873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:58.759 [2024-10-01 15:18:57.178055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.178266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:58.759 [2024-10-01 15:18:57.178296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.662 ms
00:18:58.759 [2024-10-01 15:18:57.178309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.178443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.178458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:58.759 [2024-10-01 15:18:57.178470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:58.759 [2024-10-01 15:18:57.178491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.178563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.178578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:58.759 [2024-10-01 15:18:57.178590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:18:58.759 [2024-10-01 15:18:57.178609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.178639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.178659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:58.759 [2024-10-01 15:18:57.178670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:58.759 [2024-10-01 15:18:57.178681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.759 [2024-10-01 15:18:57.178717] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:58.759 [2024-10-01 15:18:57.178731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.759 [2024-10-01 15:18:57.178745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:58.759 [2024-10-01 15:18:57.178757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:18:58.759 [2024-10-01 15:18:57.178768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.760 [2024-10-01 15:18:57.182535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.760 [2024-10-01 15:18:57.182575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:58.760 [2024-10-01 15:18:57.182589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms
00:18:58.760 [2024-10-01 15:18:57.182626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.760 [2024-10-01 15:18:57.182731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:58.760 [2024-10-01 15:18:57.182749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:58.760 [2024-10-01 15:18:57.182774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:18:58.760 [2024-10-01 15:18:57.182784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:58.760 [2024-10-01 15:18:57.183795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:58.760 [2024-10-01 15:18:57.184926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.451 ms, result 0
00:18:58.760 [2024-10-01 15:18:57.185639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:58.760 [2024-10-01 15:18:57.195217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:59.019  Copying: 4096/4096 [kB] (average 28 MBps)[2024-10-01 15:18:57.335375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:59.019 [2024-10-01 15:18:57.336777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.336955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:59.019 [2024-10-01 15:18:57.336991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:59.019 [2024-10-01 15:18:57.337010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.337045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:59.019 [2024-10-01 15:18:57.337724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.337742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:59.019 [2024-10-01 15:18:57.337754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms
00:18:59.019 [2024-10-01 15:18:57.337764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.339641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.339688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:59.019 [2024-10-01 15:18:57.339719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms
00:18:59.019 [2024-10-01 15:18:57.339730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.343186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.343309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:59.019 [2024-10-01 15:18:57.343409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms
00:18:59.019 [2024-10-01 15:18:57.343446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.349396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.349522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:59.019 [2024-10-01 15:18:57.349618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.901 ms
00:18:59.019 [2024-10-01 15:18:57.349653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.351148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.351284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:59.019 [2024-10-01 15:18:57.351304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms
00:18:59.019 [2024-10-01 15:18:57.351326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.354985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.355143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:59.019 [2024-10-01 15:18:57.355194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms
00:18:59.019 [2024-10-01 15:18:57.355222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.355371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.355384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:59.019 [2024-10-01 15:18:57.355395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms
00:18:59.019 [2024-10-01 15:18:57.355418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.357218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.357252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:59.019 [2024-10-01 15:18:57.357264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms
00:18:59.019 [2024-10-01 15:18:57.357274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.358646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.358684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:59.019 [2024-10-01 15:18:57.358696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms
00:18:59.019 [2024-10-01 15:18:57.358706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.359763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.359797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:59.019 [2024-10-01 15:18:57.359809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms
00:18:59.019 [2024-10-01 15:18:57.359819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.019 [2024-10-01 15:18:57.361051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:59.019 [2024-10-01 15:18:57.361086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:59.020 [2024-10-01 15:18:57.361098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms
00:18:59.020 [2024-10-01 15:18:57.361108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:59.020 [2024-10-01 15:18:57.361136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:59.020 [2024-10-01 15:18:57.361159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:59.020 [2024-10-01 15:18:57.361872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.361992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:59.021 [2024-10-01 15:18:57.362314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:59.021 [2024-10-01 15:18:57.362324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd477c1e-ca00-4fe0-8c49-749bfe193526
00:18:59.021 [2024-10-01 15:18:57.362346] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:59.021 [2024-10-01 15:18:57.362356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:59.021 [2024-10-01
15:18:57.362366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.021 [2024-10-01 15:18:57.362376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.021 [2024-10-01 15:18:57.362394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.021 [2024-10-01 15:18:57.362404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:59.021 [2024-10-01 15:18:57.362413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.021 [2024-10-01 15:18:57.362423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.021 [2024-10-01 15:18:57.362431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.021 [2024-10-01 15:18:57.362440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.021 [2024-10-01 15:18:57.362450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.021 [2024-10-01 15:18:57.362464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:18:59.021 [2024-10-01 15:18:57.362490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.364241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.021 [2024-10-01 15:18:57.364263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.021 [2024-10-01 15:18:57.364275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:18:59.021 [2024-10-01 15:18:57.364285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.364391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.021 [2024-10-01 15:18:57.364408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.021 [2024-10-01 15:18:57.364419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:59.021 [2024-10-01 15:18:57.364430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.370896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.371010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.021 [2024-10-01 15:18:57.371098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.371135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.371259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.371311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.021 [2024-10-01 15:18:57.371342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.371380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.371508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.371548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.021 [2024-10-01 15:18:57.371579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.371609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.371657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.371708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.021 [2024-10-01 15:18:57.371862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.371895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.385586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.385813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.021 [2024-10-01 15:18:57.385973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.386015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.394402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.394587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.021 [2024-10-01 15:18:57.394727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.394767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.394826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.394859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.021 [2024-10-01 15:18:57.394892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.394923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.394976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.395113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.021 [2024-10-01 15:18:57.395197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.395236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.395349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.395388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.021 [2024-10-01 15:18:57.395478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.395515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.021 [2024-10-01 15:18:57.395586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.021 [2024-10-01 15:18:57.395624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.021 [2024-10-01 15:18:57.395668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.021 [2024-10-01 15:18:57.395701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.022 [2024-10-01 15:18:57.395842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.022 [2024-10-01 15:18:57.395883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.022 [2024-10-01 15:18:57.395918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.022 [2024-10-01 15:18:57.395950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.022 
[2024-10-01 15:18:57.396018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.022 [2024-10-01 15:18:57.396072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.022 [2024-10-01 15:18:57.396105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.022 [2024-10-01 15:18:57.396143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.022 [2024-10-01 15:18:57.396381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.655 ms, result 0 00:18:59.280 00:18:59.280 00:18:59.280 15:18:57 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87140 00:18:59.280 15:18:57 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87140 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 87140 ']' 00:18:59.280 15:18:57 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.280 15:18:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:59.280 [2024-10-01 15:18:57.780492] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:18:59.280 [2024-10-01 15:18:57.780848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87140 ] 00:18:59.539 [2024-10-01 15:18:57.956610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.539 [2024-10-01 15:18:58.003371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.104 15:18:58 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.104 15:18:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:00.104 15:18:58 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:00.363 [2024-10-01 15:18:58.811715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:00.363 [2024-10-01 15:18:58.811781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:00.630 [2024-10-01 15:18:58.986693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:58.986760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.630 [2024-10-01 15:18:58.986776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:00.630 [2024-10-01 15:18:58.986790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:58.989303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:58.989348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.630 [2024-10-01 15:18:58.989364] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:19:00.630 [2024-10-01 15:18:58.989385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:58.989476] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.630 [2024-10-01 15:18:58.989694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.630 [2024-10-01 15:18:58.989711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:58.989726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.630 [2024-10-01 15:18:58.989738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:19:00.630 [2024-10-01 15:18:58.989750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:58.991223] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:00.630 [2024-10-01 15:18:58.993650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:58.993688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:00.630 [2024-10-01 15:18:58.993705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:19:00.630 [2024-10-01 15:18:58.993715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:58.993778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:58.993791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:00.630 [2024-10-01 15:18:58.993807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:00.630 [2024-10-01 15:18:58.993817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.000429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.000583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.630 [2024-10-01 15:18:59.000609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.565 ms 00:19:00.630 [2024-10-01 15:18:59.000620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.000729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.000742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.630 [2024-10-01 15:18:59.000755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:00.630 [2024-10-01 15:18:59.000772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.000804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.000818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:00.630 [2024-10-01 15:18:59.000831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:00.630 [2024-10-01 15:18:59.000850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.000878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:00.630 [2024-10-01 15:18:59.002494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 
15:18:59.002536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.630 [2024-10-01 15:18:59.002548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:19:00.630 [2024-10-01 15:18:59.002560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.002616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.002630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:00.630 [2024-10-01 15:18:59.002641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:00.630 [2024-10-01 15:18:59.002653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.002687] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:00.630 [2024-10-01 15:18:59.002713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:00.630 [2024-10-01 15:18:59.002765] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:00.630 [2024-10-01 15:18:59.002812] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:00.630 [2024-10-01 15:18:59.002921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:00.630 [2024-10-01 15:18:59.002941] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:00.630 [2024-10-01 15:18:59.002955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:00.630 [2024-10-01 15:18:59.002970] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:00.630 [2024-10-01 15:18:59.002982] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:00.630 [2024-10-01 15:18:59.003005] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:00.630 [2024-10-01 15:18:59.003015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:00.630 [2024-10-01 15:18:59.003027] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:00.630 [2024-10-01 15:18:59.003037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:00.630 [2024-10-01 15:18:59.003050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.003062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:00.630 [2024-10-01 15:18:59.003075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:19:00.630 [2024-10-01 15:18:59.003085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.003163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.630 [2024-10-01 15:18:59.003189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:00.630 [2024-10-01 15:18:59.003202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:00.630 [2024-10-01 15:18:59.003213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.630 [2024-10-01 15:18:59.003306] ftl_layout.c: 
768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:00.630 [2024-10-01 15:18:59.003318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:00.630 [2024-10-01 15:18:59.003335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.630 [2024-10-01 15:18:59.003347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.630 [2024-10-01 15:18:59.003362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:00.630 [2024-10-01 15:18:59.003371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:00.631 [2024-10-01 15:18:59.003414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.631 [2024-10-01 15:18:59.003435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:00.631 [2024-10-01 15:18:59.003444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:00.631 [2024-10-01 15:18:59.003456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.631 [2024-10-01 15:18:59.003466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:00.631 [2024-10-01 15:18:59.003477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:00.631 [2024-10-01 15:18:59.003487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:00.631 [2024-10-01 15:18:59.003508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:00.631 [2024-10-01 15:18:59.003543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:00.631 [2024-10-01 15:18:59.003572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:00.631 [2024-10-01 15:18:59.003606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:00.631 [2024-10-01 15:18:59.003636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:00.631 [2024-10-01 
15:18:59.003675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.631 [2024-10-01 15:18:59.003696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:00.631 [2024-10-01 15:18:59.003706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:00.631 [2024-10-01 15:18:59.003720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.631 [2024-10-01 15:18:59.003729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:00.631 [2024-10-01 15:18:59.003740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:00.631 [2024-10-01 15:18:59.003750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:00.631 [2024-10-01 15:18:59.003770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:00.631 [2024-10-01 15:18:59.003781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003790] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:00.631 [2024-10-01 15:18:59.003803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:00.631 [2024-10-01 15:18:59.003812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.631 [2024-10-01 15:18:59.003836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:00.631 [2024-10-01 15:18:59.003848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:00.631 [2024-10-01 15:18:59.003857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:00.631 [2024-10-01 15:18:59.003869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:00.631 [2024-10-01 15:18:59.003878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:00.631 [2024-10-01 15:18:59.003894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:00.631 [2024-10-01 15:18:59.003904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:00.631 [2024-10-01 15:18:59.003918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.003930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:00.631 [2024-10-01 15:18:59.003942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:00.631 [2024-10-01 15:18:59.003952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:00.631 [2024-10-01 15:18:59.003964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:00.631 [2024-10-01 15:18:59.003975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:00.631 
[2024-10-01 15:18:59.003988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:00.631 [2024-10-01 15:18:59.003998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:00.631 [2024-10-01 15:18:59.004010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:00.631 [2024-10-01 15:18:59.004020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:00.631 [2024-10-01 15:18:59.004033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:00.631 [2024-10-01 15:18:59.004090] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:00.631 [2024-10-01 15:18:59.004104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:00.631 [2024-10-01 15:18:59.004128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:00.631 [2024-10-01 15:18:59.004139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:00.631 [2024-10-01 15:18:59.004151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:00.631 [2024-10-01 15:18:59.004161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.004199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:00.631 [2024-10-01 15:18:59.004210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:19:00.631 [2024-10-01 15:18:59.004231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.015865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.016041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:00.631 [2024-10-01 15:18:59.016063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.591 ms 00:19:00.631 [2024-10-01 15:18:59.016077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.016241] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.016261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:00.631 [2024-10-01 15:18:59.016275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:00.631 [2024-10-01 15:18:59.016287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.026984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.027151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:00.631 [2024-10-01 15:18:59.027186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.691 ms 00:19:00.631 [2024-10-01 15:18:59.027201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.027280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.027299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:00.631 [2024-10-01 15:18:59.027311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:00.631 [2024-10-01 15:18:59.027323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.027756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.027772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:00.631 [2024-10-01 15:18:59.027783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:19:00.631 [2024-10-01 15:18:59.027795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.027912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.027931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:00.631 [2024-10-01 15:18:59.027944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:00.631 [2024-10-01 15:18:59.027956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.047593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.047785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:00.631 [2024-10-01 15:18:59.047813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.645 ms 00:19:00.631 [2024-10-01 15:18:59.047831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.050742] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:00.631 [2024-10-01 15:18:59.050789] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:00.631 [2024-10-01 15:18:59.050816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.050835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:00.631 [2024-10-01 15:18:59.050850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:19:00.631 [2024-10-01 15:18:59.050866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.064462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 
15:18:59.064597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:00.631 [2024-10-01 15:18:59.064670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.565 ms 00:19:00.631 [2024-10-01 15:18:59.064712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.066456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.066584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:00.631 [2024-10-01 15:18:59.066665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:19:00.631 [2024-10-01 15:18:59.066702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.068187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.068222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:00.631 [2024-10-01 15:18:59.068234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:19:00.631 [2024-10-01 15:18:59.068246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.068558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.068583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:00.631 [2024-10-01 15:18:59.068603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:19:00.631 [2024-10-01 15:18:59.068623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.088610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.088684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:00.631 [2024-10-01 15:18:59.088701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.987 ms 00:19:00.631 [2024-10-01 15:18:59.088727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.095035] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:00.631 [2024-10-01 15:18:59.110952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.111005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:00.631 [2024-10-01 15:18:59.111051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.170 ms 00:19:00.631 [2024-10-01 15:18:59.111068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.111205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.111220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:00.631 [2024-10-01 15:18:59.111235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:00.631 [2024-10-01 15:18:59.111249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.111324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.111337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:00.631 [2024-10-01 15:18:59.111360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:00.631 [2024-10-01 
15:18:59.111371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.111407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.111417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:00.631 [2024-10-01 15:18:59.111433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:00.631 [2024-10-01 15:18:59.111444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.111486] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:00.631 [2024-10-01 15:18:59.111498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.111511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:00.631 [2024-10-01 15:18:59.111521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:00.631 [2024-10-01 15:18:59.111533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.115302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.115345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:00.631 [2024-10-01 15:18:59.115358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.751 ms 00:19:00.631 [2024-10-01 15:18:59.115380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.115478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.631 [2024-10-01 15:18:59.115494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:00.631 [2024-10-01 15:18:59.115505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:00.631 [2024-10-01 15:18:59.115517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.631 [2024-10-01 15:18:59.116523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.631 [2024-10-01 15:18:59.117475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.725 ms, result 0 00:19:00.631 [2024-10-01 15:18:59.118504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:00.631 Some configs were skipped because the RPC state that can call them passed over. 
00:19:00.631 15:18:59 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:00.890 [2024-10-01 15:18:59.385034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.890 [2024-10-01 15:18:59.385294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:00.890 [2024-10-01 15:18:59.385391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:19:00.890 [2024-10-01 15:18:59.385430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.890 [2024-10-01 15:18:59.385527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.095 ms, result 0 00:19:00.890 true 00:19:00.890 15:18:59 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:01.149 [2024-10-01 15:18:59.596457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.149 [2024-10-01 15:18:59.596520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:01.149 [2024-10-01 15:18:59.596537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:19:01.149 [2024-10-01 15:18:59.596551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.149 [2024-10-01 15:18:59.596591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.253 ms, result 0 00:19:01.149 true 00:19:01.149 15:18:59 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87140 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 87140 ']' 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 87140 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87140 00:19:01.149 killing process with pid 87140 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87140' 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 87140 00:19:01.149 15:18:59 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 87140 00:19:01.410 [2024-10-01 15:18:59.798659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.798729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.410 [2024-10-01 15:18:59.798748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:01.410 [2024-10-01 15:18:59.798759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.798789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:01.410 [2024-10-01 15:18:59.799452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.799467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.410 [2024-10-01 15:18:59.799484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.649 ms 00:19:01.410 [2024-10-01 15:18:59.799496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.799765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.799783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.410 [2024-10-01 15:18:59.799794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:19:01.410 [2024-10-01 15:18:59.799806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.803113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.803153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.410 [2024-10-01 15:18:59.803166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:19:01.410 [2024-10-01 15:18:59.803187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.808903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.808947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.410 [2024-10-01 15:18:59.808959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.686 ms 00:19:01.410 [2024-10-01 15:18:59.808973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.810386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.810426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.410 [2024-10-01 15:18:59.810438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:19:01.410 [2024-10-01 15:18:59.810449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.813908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.410 [2024-10-01 15:18:59.814081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.410 [2024-10-01 15:18:59.814102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:19:01.410 [2024-10-01 15:18:59.814116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.410 [2024-10-01 15:18:59.814300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.411 [2024-10-01 15:18:59.814319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.411 [2024-10-01 15:18:59.814331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:01.411 [2024-10-01 15:18:59.814344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.411 [2024-10-01 15:18:59.816400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.411 [2024-10-01 15:18:59.816442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.411 [2024-10-01 15:18:59.816454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:19:01.411 [2024-10-01 15:18:59.816469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.411 [2024-10-01 15:18:59.817994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.411 [2024-10-01 15:18:59.818033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.411 [2024-10-01 
15:18:59.818045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:19:01.411 [2024-10-01 15:18:59.818057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.411 [2024-10-01 15:18:59.819112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.411 [2024-10-01 15:18:59.819153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.411 [2024-10-01 15:18:59.819164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:19:01.411 [2024-10-01 15:18:59.819187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.411 [2024-10-01 15:18:59.820299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.411 [2024-10-01 15:18:59.820430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.411 [2024-10-01 15:18:59.820448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:19:01.411 [2024-10-01 15:18:59.820461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.411 [2024-10-01 15:18:59.820497] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.411 [2024-10-01 15:18:59.820515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.820988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 
15:18:59.821011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:01.411 [2024-10-01 15:18:59.821329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.411 [2024-10-01 15:18:59.821408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.412 [2024-10-01 15:18:59.821758] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.412 [2024-10-01 15:18:59.821768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd477c1e-ca00-4fe0-8c49-749bfe193526 00:19:01.412 [2024-10-01 15:18:59.821782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.412 [2024-10-01 15:18:59.821792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.412 [2024-10-01 15:18:59.821811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.412 [2024-10-01 15:18:59.821824] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.412 [2024-10-01 15:18:59.821836] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.412 [2024-10-01 15:18:59.821847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.412 [2024-10-01 15:18:59.821864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.412 [2024-10-01 15:18:59.821873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.412 [2024-10-01 15:18:59.821884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.412 [2024-10-01 15:18:59.821894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.412 [2024-10-01 15:18:59.821907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.412 [2024-10-01 15:18:59.821917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:19:01.412 [2024-10-01 15:18:59.821932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.823635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.412 [2024-10-01 15:18:59.823766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.412 [2024-10-01 15:18:59.823785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:19:01.412 [2024-10-01 15:18:59.823798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.823913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:01.412 [2024-10-01 15:18:59.823927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.412 [2024-10-01 15:18:59.823937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:01.412 [2024-10-01 15:18:59.823950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.830852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.831021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.412 [2024-10-01 15:18:59.831041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.831055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.831145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.831160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.412 [2024-10-01 15:18:59.831189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.831205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.831259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.831275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.412 [2024-10-01 15:18:59.831288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.831301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.831320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.831333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.412 [2024-10-01 15:18:59.831351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.831363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.844738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.844791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.412 [2024-10-01 15:18:59.844811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.844825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.412 [2024-10-01 15:18:59.853147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.412 [2024-10-01 15:18:59.853260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:01.412 [2024-10-01 15:18:59.853307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.412 [2024-10-01 15:18:59.853331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.412 [2024-10-01 15:18:59.853451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.412 [2024-10-01 15:18:59.853533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.412 [2024-10-01 15:18:59.853620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.412 [2024-10-01 15:18:59.853699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.412 [2024-10-01 15:18:59.853709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.412 [2024-10-01 15:18:59.853722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.412 [2024-10-01 15:18:59.853865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.276 ms, result 0 00:19:01.671 15:19:00 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:01.930 [2024-10-01 15:19:00.220765] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:19:01.930 [2024-10-01 15:19:00.220895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87182 ] 00:19:01.930 [2024-10-01 15:19:00.388761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.930 [2024-10-01 15:19:00.433040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.191 [2024-10-01 15:19:00.536307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.191 [2024-10-01 15:19:00.536380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.191 [2024-10-01 15:19:00.694841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.694898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.191 [2024-10-01 15:19:00.694915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.191 [2024-10-01 15:19:00.694935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.697428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.697572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.191 [2024-10-01 15:19:00.697599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:19:02.191 [2024-10-01 15:19:00.697609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.697746] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.191 [2024-10-01 15:19:00.697981] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.191 [2024-10-01 15:19:00.698003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.698014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.191 [2024-10-01 15:19:00.698028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:02.191 [2024-10-01 15:19:00.698038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.699492] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.191 [2024-10-01 15:19:00.701940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.701974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.191 [2024-10-01 15:19:00.701992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.453 ms 00:19:02.191 [2024-10-01 15:19:00.702005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.702066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.702079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.191 [2024-10-01 15:19:00.702090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:02.191 [2024-10-01 15:19:00.702100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.708676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:02.191 [2024-10-01 15:19:00.708821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.191 [2024-10-01 15:19:00.708842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.551 ms 00:19:02.191 [2024-10-01 15:19:00.708852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.708982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.709000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.191 [2024-10-01 15:19:00.709012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:02.191 [2024-10-01 15:19:00.709029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.709059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.709071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.191 [2024-10-01 15:19:00.709086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:02.191 [2024-10-01 15:19:00.709103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.709128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:02.191 [2024-10-01 15:19:00.710748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.710775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.191 [2024-10-01 15:19:00.710787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:19:02.191 [2024-10-01 15:19:00.710797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.710866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.710882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.191 [2024-10-01 15:19:00.710896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:02.191 [2024-10-01 15:19:00.710913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.710933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.191 [2024-10-01 15:19:00.710954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.191 [2024-10-01 15:19:00.710989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.191 [2024-10-01 15:19:00.711007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:02.191 [2024-10-01 15:19:00.711100] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.191 [2024-10-01 15:19:00.711113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.191 [2024-10-01 15:19:00.711126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:02.191 [2024-10-01 15:19:00.711146] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.191 [2024-10-01 15:19:00.711159] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.191 [2024-10-01 15:19:00.711186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:02.191 [2024-10-01 15:19:00.711197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.191 [2024-10-01 15:19:00.711207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.191 [2024-10-01 15:19:00.711216] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.191 [2024-10-01 15:19:00.711227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.711248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.191 [2024-10-01 15:19:00.711261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:02.191 [2024-10-01 15:19:00.711271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.711355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.191 [2024-10-01 15:19:00.711366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.191 [2024-10-01 15:19:00.711377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:02.191 [2024-10-01 15:19:00.711387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.191 [2024-10-01 15:19:00.711483] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.191 [2024-10-01 15:19:00.711504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.191 [2024-10-01 15:19:00.711515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.191 [2024-10-01 15:19:00.711534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.191 [2024-10-01 15:19:00.711554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:02.191 [2024-10-01 15:19:00.711572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.191 [2024-10-01 15:19:00.711589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.191 [2024-10-01 15:19:00.711608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.191 [2024-10-01 15:19:00.711621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:02.191 [2024-10-01 15:19:00.711630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.191 [2024-10-01 15:19:00.711639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.191 [2024-10-01 15:19:00.711662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:02.191 [2024-10-01 15:19:00.711672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.191 [2024-10-01 15:19:00.711694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:02.191 [2024-10-01 15:19:00.711703] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.191 [2024-10-01 15:19:00.711725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:02.191 [2024-10-01 15:19:00.711734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.192 [2024-10-01 15:19:00.711746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.192 [2024-10-01 15:19:00.711755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.192 [2024-10-01 15:19:00.711788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.192 [2024-10-01 15:19:00.711798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.192 [2024-10-01 15:19:00.711819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.192 [2024-10-01 15:19:00.711828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.192 [2024-10-01 15:19:00.711846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.192 [2024-10-01 15:19:00.711856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.192 [2024-10-01 15:19:00.711873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.192 [2024-10-01 15:19:00.711883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:02.192 [2024-10-01 15:19:00.711892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.192 [2024-10-01 15:19:00.711901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.192 [2024-10-01 15:19:00.711910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:02.192 [2024-10-01 15:19:00.711918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.192 [2024-10-01 15:19:00.711939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:02.192 [2024-10-01 15:19:00.711948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.192 [2024-10-01 15:19:00.711956] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.192 [2024-10-01 15:19:00.711966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.192 [2024-10-01 15:19:00.711982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.192 [2024-10-01 15:19:00.711992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.192 [2024-10-01 15:19:00.712003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.192 [2024-10-01 15:19:00.712012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.192 [2024-10-01 15:19:00.712022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.192 
[2024-10-01 15:19:00.712032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.192 [2024-10-01 15:19:00.712041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.192 [2024-10-01 15:19:00.712050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.192 [2024-10-01 15:19:00.712061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.192 [2024-10-01 15:19:00.712073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:02.192 [2024-10-01 15:19:00.712098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:02.192 [2024-10-01 15:19:00.712108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:02.192 [2024-10-01 15:19:00.712119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:02.192 [2024-10-01 15:19:00.712129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:02.192 [2024-10-01 15:19:00.712139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:02.192 [2024-10-01 15:19:00.712149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:02.192 [2024-10-01 15:19:00.712159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:02.192 [2024-10-01 15:19:00.712179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:02.192 [2024-10-01 15:19:00.712190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:02.192 [2024-10-01 15:19:00.712241] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.192 [2024-10-01 15:19:00.712253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.192 [2024-10-01 15:19:00.712277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.192 [2024-10-01 15:19:00.712288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.192 [2024-10-01 15:19:00.712298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.192 [2024-10-01 15:19:00.712309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.192 [2024-10-01 15:19:00.712319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.192 [2024-10-01 15:19:00.712332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:19:02.192 [2024-10-01 15:19:00.712342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.740388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.740436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.451 [2024-10-01 15:19:00.740456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.029 ms 00:19:02.451 [2024-10-01 15:19:00.740471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.740636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.740653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:02.451 [2024-10-01 15:19:00.740668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:02.451 [2024-10-01 15:19:00.740687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.753758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.753805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.451 [2024-10-01 15:19:00.753821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.060 ms 00:19:02.451 [2024-10-01 15:19:00.753834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.753933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.753948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.451 [2024-10-01 15:19:00.753966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:02.451 [2024-10-01 15:19:00.753978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.754451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.754471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.451 [2024-10-01 15:19:00.754484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:02.451 [2024-10-01 15:19:00.754495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.754631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.754646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.451 [2024-10-01 15:19:00.754659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:02.451 [2024-10-01 15:19:00.754674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.761277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.451 [2024-10-01 15:19:00.761340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms 00:19:02.451 [2024-10-01 15:19:00.761352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.763981] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:02.451 [2024-10-01 15:19:00.764130] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.451 [2024-10-01 15:19:00.764150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.764162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.451 [2024-10-01 15:19:00.764197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:19:02.451 [2024-10-01 15:19:00.764207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.777475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.777512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.451 [2024-10-01 15:19:00.777527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.220 ms 00:19:02.451 [2024-10-01 15:19:00.777538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.779280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.779312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.451 [2024-10-01 15:19:00.779324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:19:02.451 [2024-10-01 15:19:00.779334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.780681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.780712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.451 [2024-10-01 15:19:00.780732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:19:02.451 [2024-10-01 15:19:00.780742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.781032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.781059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.451 [2024-10-01 15:19:00.781071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:19:02.451 [2024-10-01 15:19:00.781085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.800875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.800949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.451 [2024-10-01 15:19:00.800967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.796 ms 00:19:02.451 [2024-10-01 15:19:00.800978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.807266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:02.451 [2024-10-01 15:19:00.823417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.823471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.451 [2024-10-01 15:19:00.823486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.397 ms 00:19:02.451 [2024-10-01 15:19:00.823497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.823596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.823610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.451 [2024-10-01 15:19:00.823622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:02.451 [2024-10-01 15:19:00.823633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.823701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.823713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.451 [2024-10-01 15:19:00.823723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:02.451 [2024-10-01 15:19:00.823735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.823758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.823769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.451 [2024-10-01 15:19:00.823779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.451 [2024-10-01 15:19:00.823788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.823825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.451 [2024-10-01 15:19:00.823846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.823860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.451 [2024-10-01 15:19:00.823871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:02.451 [2024-10-01 15:19:00.823880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.827487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.827623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.451 [2024-10-01 15:19:00.827644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:19:02.451 [2024-10-01 15:19:00.827663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 [2024-10-01 15:19:00.827758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.451 [2024-10-01 15:19:00.827775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.451 [2024-10-01 15:19:00.827787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:02.451 [2024-10-01 15:19:00.827797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.451 
[2024-10-01 15:19:00.828966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.451 [2024-10-01 15:19:00.829905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.032 ms, result 0 00:19:02.451 [2024-10-01 15:19:00.830529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.451 [2024-10-01 15:19:00.840164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.183  Copying: 30/256 [MB] (30 MBps) Copying: 58/256 [MB] (28 MBps) Copying: 85/256 [MB] (26 MBps) Copying: 112/256 [MB] (26 MBps) Copying: 139/256 [MB] (27 MBps) Copying: 167/256 [MB] (28 MBps) Copying: 194/256 [MB] (26 MBps) Copying: 219/256 [MB] (25 MBps) Copying: 247/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-10-01 15:19:10.664436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.183 [2024-10-01 15:19:10.666473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.183 [2024-10-01 15:19:10.666560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.183 [2024-10-01 15:19:10.666597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.183 [2024-10-01 15:19:10.666632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.666689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.184 [2024-10-01 15:19:10.667530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.667911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.184 [2024-10-01 15:19:10.667956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:19:12.184 [2024-10-01 15:19:10.667985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.668538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.668594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.184 [2024-10-01 15:19:10.668621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:19:12.184 [2024-10-01 15:19:10.668642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.673286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.673346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.184 [2024-10-01 15:19:10.673364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:19:12.184 [2024-10-01 15:19:10.673378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.681601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.681672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.184 [2024-10-01 15:19:10.681691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.194 ms 00:19:12.184 [2024-10-01 15:19:10.681708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.683753] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.683956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.184 [2024-10-01 15:19:10.683981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:19:12.184 [2024-10-01 15:19:10.684017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.688567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.688620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.184 [2024-10-01 15:19:10.688646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.496 ms 00:19:12.184 [2024-10-01 15:19:10.688668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.688806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.688824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.184 [2024-10-01 15:19:10.688842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:12.184 [2024-10-01 15:19:10.688859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.691133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.691188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.184 [2024-10-01 15:19:10.691206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:19:12.184 [2024-10-01 15:19:10.691219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.692962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.693007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.184 [2024-10-01 15:19:10.693025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:19:12.184 [2024-10-01 15:19:10.693041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.694249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.694291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.184 [2024-10-01 15:19:10.694305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:19:12.184 [2024-10-01 15:19:10.694317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.695504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.184 [2024-10-01 15:19:10.695546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.184 [2024-10-01 15:19:10.695562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:19:12.184 [2024-10-01 15:19:10.695575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.184 [2024-10-01 15:19:10.695616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.184 [2024-10-01 15:19:10.695676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 
15:19:10.695720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.695992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.184 [2024-10-01 15:19:10.696104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:12.184 [FTL][ftl0] Bands 28-99: 0 / 261120 wr_cnt: 0 state: free (all identical; the per-band NOTICE lines are condensed here)
00:19:12.185 [2024-10-01 15:19:10.697347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:12.185 [2024-10-01 15:19:10.697376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:12.185 [2024-10-01 15:19:10.697390] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:
cd477c1e-ca00-4fe0-8c49-749bfe193526 00:19:12.185 [2024-10-01 15:19:10.697417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.185 [2024-10-01 15:19:10.697599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.185 [2024-10-01 15:19:10.697613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.185 [2024-10-01 15:19:10.697626] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.185 [2024-10-01 15:19:10.697638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.185 [2024-10-01 15:19:10.697659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.185 [2024-10-01 15:19:10.697672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.185 [2024-10-01 15:19:10.697684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.185 [2024-10-01 15:19:10.697696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.185 [2024-10-01 15:19:10.697713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.185 [2024-10-01 15:19:10.697729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.185 [2024-10-01 15:19:10.697750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:19:12.185 [2024-10-01 15:19:10.697763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.699661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.185 [2024-10-01 15:19:10.699707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.185 [2024-10-01 15:19:10.699720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:19:12.185 [2024-10-01 15:19:10.699746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.699870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.185 [2024-10-01 15:19:10.699896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.185 [2024-10-01 15:19:10.699912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:12.185 [2024-10-01 15:19:10.699923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.706968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.185 [2024-10-01 15:19:10.707144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.185 [2024-10-01 15:19:10.707203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.185 [2024-10-01 15:19:10.707222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.707336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.185 [2024-10-01 15:19:10.707360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.185 [2024-10-01 15:19:10.707375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.185 [2024-10-01 15:19:10.707391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.707469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.185 [2024-10-01 15:19:10.707488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.185 
[2024-10-01 15:19:10.707506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.185 [2024-10-01 15:19:10.707528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.707552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.185 [2024-10-01 15:19:10.707566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.185 [2024-10-01 15:19:10.707583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.185 [2024-10-01 15:19:10.707597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.185 [2024-10-01 15:19:10.723589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.185 [2024-10-01 15:19:10.723669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.185 [2024-10-01 15:19:10.723687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.185 [2024-10-01 15:19:10.723698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.445 [2024-10-01 15:19:10.734250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.445 [2024-10-01 15:19:10.734386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.445 [2024-10-01 15:19:10.734461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.445 [2024-10-01 15:19:10.734623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.445 [2024-10-01 15:19:10.734726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734805] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.445 [2024-10-01 15:19:10.734818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.734890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.445 [2024-10-01 15:19:10.734906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.445 [2024-10-01 15:19:10.734920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.445 [2024-10-01 15:19:10.734937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.445 [2024-10-01 15:19:10.735116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.730 ms, result 0 00:19:12.445 00:19:12.445 00:19:12.703 15:19:11 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:12.962 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:12.962 15:19:11 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:12.962 15:19:11 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:12.962 15:19:11 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:12.962 15:19:11 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:12.962 15:19:11 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:13.220 15:19:11 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:13.220 15:19:11 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87140 00:19:13.220 Process with pid 87140 is not found 00:19:13.220 15:19:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 87140 ']' 00:19:13.220 15:19:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 87140 00:19:13.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87140) - No such process 00:19:13.220 15:19:11 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 87140 is not found' 00:19:13.220 ************************************ 00:19:13.220 END TEST ftl_trim 00:19:13.220 ************************************ 00:19:13.220 00:19:13.220 real 0m52.177s 00:19:13.220 user 1m13.012s 00:19:13.220 sys 0m6.389s 00:19:13.221 15:19:11 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.221 15:19:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:13.221 15:19:11 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:13.221 15:19:11 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:13.221 15:19:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.221 15:19:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:13.221 ************************************ 00:19:13.221 START TEST ftl_restore 00:19:13.221 ************************************ 00:19:13.221 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:13.480 * Looking for test storage... 
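Before ftl_trim exits above, the harness verifies the written data with md5sum -c (the file checks out: ".../test/ftl/data: OK") and only then tears everything down; killprocess probes the pid with kill -0 first, so a target that already exited yields the harmless "Process with pid 87140 is not found" note instead of a failure. A hedged sketch of that teardown idiom (the file name and the svcpid variable are placeholders, not the harness's actual code):

  # verify the test data, then kill the target only if it is still running
  md5sum -c testfile.md5 || exit 1         # non-zero status if any checksum differs
  if kill -0 "$svcpid" 2>/dev/null; then   # signal 0 only tests process existence
      kill "$svcpid"
  else
      echo "Process with pid $svcpid is not found"
  fi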
00:19:13.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.480 15:19:11 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:13.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.480 --rc genhtml_branch_coverage=1 00:19:13.480 --rc genhtml_function_coverage=1 00:19:13.480 --rc genhtml_legend=1 00:19:13.480 --rc geninfo_all_blocks=1 00:19:13.480 --rc geninfo_unexecuted_blocks=1 00:19:13.480 00:19:13.480 ' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:13.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.480 --rc genhtml_branch_coverage=1 00:19:13.480 --rc genhtml_function_coverage=1 
00:19:13.480 --rc genhtml_legend=1 00:19:13.480 --rc geninfo_all_blocks=1 00:19:13.480 --rc geninfo_unexecuted_blocks=1 00:19:13.480 00:19:13.480 ' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:13.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.480 --rc genhtml_branch_coverage=1 00:19:13.480 --rc genhtml_function_coverage=1 00:19:13.480 --rc genhtml_legend=1 00:19:13.480 --rc geninfo_all_blocks=1 00:19:13.480 --rc geninfo_unexecuted_blocks=1 00:19:13.480 00:19:13.480 ' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:13.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.480 --rc genhtml_branch_coverage=1 00:19:13.480 --rc genhtml_function_coverage=1 00:19:13.480 --rc genhtml_legend=1 00:19:13.480 --rc geninfo_all_blocks=1 00:19:13.480 --rc geninfo_unexecuted_blocks=1 00:19:13.480 00:19:13.480 ' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.36kb7mFLGR 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87364 00:19:13.480 15:19:11 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87364 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 87364 ']' 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.480 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.481 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.481 15:19:11 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:13.739 [2024-10-01 15:19:12.057071] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
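The option handling of restore.sh is visible in the xtrace above: getopts with the spec :u:c:f consumes -c 0000:00:10.0 as the NV-cache controller, the leftover positional argument 0000:00:11.0 becomes the base device, and the RPC timeout is pinned to 240 seconds before spdk_tgt is launched. Reconstructed as a sketch (a reading aid, not the script itself; the -u and -f branches are assumptions inferred from the option string alone):

  # ':u:c:f': leading ':' = silent errors; u and c take arguments, f does not
  while getopts ':u:c:f' opt; do
      case $opt in
          c) nv_cache=$OPTARG ;;   # NV-cache device BDF, as in the trace
          u) uuid=$OPTARG ;;       # assumed: reuse an existing FTL UUID
          f) fast=1 ;;             # assumed: boolean flag, no argument
      esac
  done
  shift $((OPTIND - 1))            # the trace shows the equivalent 'shift 2'
  device=$1                        # base device BDF
  timeout=240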
00:19:13.739 [2024-10-01 15:19:12.057423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87364 ] 00:19:13.739 [2024-10-01 15:19:12.225987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.739 [2024-10-01 15:19:12.272015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.674 15:19:12 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.674 15:19:12 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:14.674 15:19:12 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:14.674 15:19:13 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:14.674 15:19:13 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:14.674 15:19:13 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:14.674 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:14.674 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:14.674 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:14.674 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:14.674 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:14.933 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:14.933 { 00:19:14.933 "name": "nvme0n1", 00:19:14.933 "aliases": [ 00:19:14.933 "d4ef1c5b-c5eb-44d3-ad8d-d8c72f046d78" 00:19:14.933 ], 00:19:14.933 "product_name": "NVMe disk", 00:19:14.933 "block_size": 4096, 00:19:14.933 "num_blocks": 1310720, 00:19:14.933 "uuid": "d4ef1c5b-c5eb-44d3-ad8d-d8c72f046d78", 00:19:14.933 "numa_id": -1, 00:19:14.933 "assigned_rate_limits": { 00:19:14.933 "rw_ios_per_sec": 0, 00:19:14.933 "rw_mbytes_per_sec": 0, 00:19:14.933 "r_mbytes_per_sec": 0, 00:19:14.933 "w_mbytes_per_sec": 0 00:19:14.933 }, 00:19:14.933 "claimed": true, 00:19:14.933 "claim_type": "read_many_write_one", 00:19:14.933 "zoned": false, 00:19:14.933 "supported_io_types": { 00:19:14.933 "read": true, 00:19:14.933 "write": true, 00:19:14.933 "unmap": true, 00:19:14.933 "flush": true, 00:19:14.933 "reset": true, 00:19:14.933 "nvme_admin": true, 00:19:14.933 "nvme_io": true, 00:19:14.933 "nvme_io_md": false, 00:19:14.933 "write_zeroes": true, 00:19:14.933 "zcopy": false, 00:19:14.933 "get_zone_info": false, 00:19:14.933 "zone_management": false, 00:19:14.933 "zone_append": false, 00:19:14.933 "compare": true, 00:19:14.933 "compare_and_write": false, 00:19:14.933 "abort": true, 00:19:14.933 "seek_hole": false, 00:19:14.933 "seek_data": false, 00:19:14.933 "copy": true, 00:19:14.933 "nvme_iov_md": false 00:19:14.933 }, 00:19:14.933 "driver_specific": { 00:19:14.933 "nvme": [ 
00:19:14.933 { 00:19:14.933 "pci_address": "0000:00:11.0", 00:19:14.933 "trid": { 00:19:14.933 "trtype": "PCIe", 00:19:14.933 "traddr": "0000:00:11.0" 00:19:14.933 }, 00:19:14.933 "ctrlr_data": { 00:19:14.933 "cntlid": 0, 00:19:14.933 "vendor_id": "0x1b36", 00:19:14.933 "model_number": "QEMU NVMe Ctrl", 00:19:14.933 "serial_number": "12341", 00:19:14.933 "firmware_revision": "8.0.0", 00:19:14.933 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:14.933 "oacs": { 00:19:14.933 "security": 0, 00:19:14.933 "format": 1, 00:19:14.933 "firmware": 0, 00:19:14.933 "ns_manage": 1 00:19:14.933 }, 00:19:14.933 "multi_ctrlr": false, 00:19:14.933 "ana_reporting": false 00:19:14.933 }, 00:19:14.933 "vs": { 00:19:14.933 "nvme_version": "1.4" 00:19:14.933 }, 00:19:14.933 "ns_data": { 00:19:14.933 "id": 1, 00:19:14.933 "can_share": false 00:19:14.933 } 00:19:14.933 } 00:19:14.933 ], 00:19:14.933 "mp_policy": "active_passive" 00:19:14.933 } 00:19:14.933 } 00:19:14.933 ]' 00:19:14.933 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:15.192 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:15.192 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:15.192 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:15.192 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:15.192 15:19:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:15.192 15:19:13 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:15.192 15:19:13 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:15.192 15:19:13 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:15.192 15:19:13 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:15.192 15:19:13 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:15.450 15:19:13 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=87165344-2743-40ea-934b-bffc6c5a5297 00:19:15.450 15:19:13 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:15.450 15:19:13 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87165344-2743-40ea-934b-bffc6c5a5297 00:19:15.450 15:19:13 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:15.709 15:19:14 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7c03b76c-354b-443e-a3f3-df6b58c8ae8b 00:19:15.709 15:19:14 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7c03b76c-354b-443e-a3f3-df6b58c8ae8b 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:15.967 15:19:14 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:15.967 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:15.967 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:15.967 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:15.967 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:15.967 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:16.225 { 00:19:16.225 "name": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:16.225 "aliases": [ 00:19:16.225 "lvs/nvme0n1p0" 00:19:16.225 ], 00:19:16.225 "product_name": "Logical Volume", 00:19:16.225 "block_size": 4096, 00:19:16.225 "num_blocks": 26476544, 00:19:16.225 "uuid": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:16.225 "assigned_rate_limits": { 00:19:16.225 "rw_ios_per_sec": 0, 00:19:16.225 "rw_mbytes_per_sec": 0, 00:19:16.225 "r_mbytes_per_sec": 0, 00:19:16.225 "w_mbytes_per_sec": 0 00:19:16.225 }, 00:19:16.225 "claimed": false, 00:19:16.225 "zoned": false, 00:19:16.225 "supported_io_types": { 00:19:16.225 "read": true, 00:19:16.225 "write": true, 00:19:16.225 "unmap": true, 00:19:16.225 "flush": false, 00:19:16.225 "reset": true, 00:19:16.225 "nvme_admin": false, 00:19:16.225 "nvme_io": false, 00:19:16.225 "nvme_io_md": false, 00:19:16.225 "write_zeroes": true, 00:19:16.225 "zcopy": false, 00:19:16.225 "get_zone_info": false, 00:19:16.225 "zone_management": false, 00:19:16.225 "zone_append": false, 00:19:16.225 "compare": false, 00:19:16.225 "compare_and_write": false, 00:19:16.225 "abort": false, 00:19:16.225 "seek_hole": true, 00:19:16.225 "seek_data": true, 00:19:16.225 "copy": false, 00:19:16.225 "nvme_iov_md": false 00:19:16.225 }, 00:19:16.225 "driver_specific": { 00:19:16.225 "lvol": { 00:19:16.225 "lvol_store_uuid": "7c03b76c-354b-443e-a3f3-df6b58c8ae8b", 00:19:16.225 "base_bdev": "nvme0n1", 00:19:16.225 "thin_provision": true, 00:19:16.225 "num_allocated_clusters": 0, 00:19:16.225 "snapshot": false, 00:19:16.225 "clone": false, 00:19:16.225 "esnap_clone": false 00:19:16.225 } 00:19:16.225 } 00:19:16.225 } 00:19:16.225 ]' 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:16.225 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:16.225 15:19:14 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:16.225 15:19:14 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:16.225 15:19:14 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:16.484 15:19:14 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:16.484 15:19:14 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:16.484 15:19:14 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:16.484 15:19:14 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:16.484 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:16.485 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:16.485 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:16.485 15:19:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:16.744 { 00:19:16.744 "name": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:16.744 "aliases": [ 00:19:16.744 "lvs/nvme0n1p0" 00:19:16.744 ], 00:19:16.744 "product_name": "Logical Volume", 00:19:16.744 "block_size": 4096, 00:19:16.744 "num_blocks": 26476544, 00:19:16.744 "uuid": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:16.744 "assigned_rate_limits": { 00:19:16.744 "rw_ios_per_sec": 0, 00:19:16.744 "rw_mbytes_per_sec": 0, 00:19:16.744 "r_mbytes_per_sec": 0, 00:19:16.744 "w_mbytes_per_sec": 0 00:19:16.744 }, 00:19:16.744 "claimed": false, 00:19:16.744 "zoned": false, 00:19:16.744 "supported_io_types": { 00:19:16.744 "read": true, 00:19:16.744 "write": true, 00:19:16.744 "unmap": true, 00:19:16.744 "flush": false, 00:19:16.744 "reset": true, 00:19:16.744 "nvme_admin": false, 00:19:16.744 "nvme_io": false, 00:19:16.744 "nvme_io_md": false, 00:19:16.744 "write_zeroes": true, 00:19:16.744 "zcopy": false, 00:19:16.744 "get_zone_info": false, 00:19:16.744 "zone_management": false, 00:19:16.744 "zone_append": false, 00:19:16.744 "compare": false, 00:19:16.744 "compare_and_write": false, 00:19:16.744 "abort": false, 00:19:16.744 "seek_hole": true, 00:19:16.744 "seek_data": true, 00:19:16.744 "copy": false, 00:19:16.744 "nvme_iov_md": false 00:19:16.744 }, 00:19:16.744 "driver_specific": { 00:19:16.744 "lvol": { 00:19:16.744 "lvol_store_uuid": "7c03b76c-354b-443e-a3f3-df6b58c8ae8b", 00:19:16.744 "base_bdev": "nvme0n1", 00:19:16.744 "thin_provision": true, 00:19:16.744 "num_allocated_clusters": 0, 00:19:16.744 "snapshot": false, 00:19:16.744 "clone": false, 00:19:16.744 "esnap_clone": false 00:19:16.744 } 00:19:16.744 } 00:19:16.744 } 00:19:16.744 ]' 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:16.744 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:16.744 15:19:15 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:16.744 15:19:15 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:17.003 15:19:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:17.003 15:19:15 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:17.003 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:17.003 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:17.003 15:19:15 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:19:17.003 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:17.003 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 00:19:17.261 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:17.261 { 00:19:17.261 "name": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:17.261 "aliases": [ 00:19:17.261 "lvs/nvme0n1p0" 00:19:17.261 ], 00:19:17.261 "product_name": "Logical Volume", 00:19:17.261 "block_size": 4096, 00:19:17.261 "num_blocks": 26476544, 00:19:17.261 "uuid": "25e6e272-0a2f-4e0c-824c-91fe3e12afa9", 00:19:17.261 "assigned_rate_limits": { 00:19:17.261 "rw_ios_per_sec": 0, 00:19:17.261 "rw_mbytes_per_sec": 0, 00:19:17.261 "r_mbytes_per_sec": 0, 00:19:17.261 "w_mbytes_per_sec": 0 00:19:17.261 }, 00:19:17.261 "claimed": false, 00:19:17.261 "zoned": false, 00:19:17.261 "supported_io_types": { 00:19:17.261 "read": true, 00:19:17.261 "write": true, 00:19:17.261 "unmap": true, 00:19:17.261 "flush": false, 00:19:17.261 "reset": true, 00:19:17.261 "nvme_admin": false, 00:19:17.261 "nvme_io": false, 00:19:17.261 "nvme_io_md": false, 00:19:17.261 "write_zeroes": true, 00:19:17.261 "zcopy": false, 00:19:17.261 "get_zone_info": false, 00:19:17.261 "zone_management": false, 00:19:17.261 "zone_append": false, 00:19:17.261 "compare": false, 00:19:17.261 "compare_and_write": false, 00:19:17.261 "abort": false, 00:19:17.261 "seek_hole": true, 00:19:17.261 "seek_data": true, 00:19:17.261 "copy": false, 00:19:17.261 "nvme_iov_md": false 00:19:17.261 }, 00:19:17.261 "driver_specific": { 00:19:17.261 "lvol": { 00:19:17.261 "lvol_store_uuid": "7c03b76c-354b-443e-a3f3-df6b58c8ae8b", 00:19:17.261 "base_bdev": "nvme0n1", 00:19:17.261 "thin_provision": true, 00:19:17.261 "num_allocated_clusters": 0, 00:19:17.261 "snapshot": false, 00:19:17.261 "clone": false, 00:19:17.261 "esnap_clone": false 00:19:17.261 } 00:19:17.261 } 00:19:17.261 } 00:19:17.261 ]' 00:19:17.261 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:17.261 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:17.261 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:17.524 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:17.524 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:17.524 15:19:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 --l2p_dram_limit 10' 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:17.524 15:19:15 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:17.525 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:17.525 15:19:15 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 25e6e272-0a2f-4e0c-824c-91fe3e12afa9 --l2p_dram_limit 10 -c nvc0n1p0 00:19:17.525 
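The one genuine error in this run is the diagnostic above: restore.sh line 54 evaluates '[' '' -eq 1 ']', and test's -eq requires integers on both sides, so the empty operand produces "integer expression expected". The run continues because the failing [ merely selects a branch. A sketch of the failure mode and the usual guard (flag is an invented name; the log does not show which variable expanded to the empty string):

  flag=''
  [ "$flag" -eq 1 ]          # [: : integer expression expected, exit status 2
  [ "${flag:-0}" -eq 1 ]     # safe: an empty value defaults to 0, test is false

After that, bdev_ftl_create proceeds normally, binding the FTL device to the thin lvol with the DRAM-resident L2P capped via --l2p_dram_limit 10 and nvc0n1p0 as the write-buffer cache.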
[2024-10-01 15:19:16.028523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.028596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.525 [2024-10-01 15:19:16.028615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:17.525 [2024-10-01 15:19:16.028629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.028699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.028715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.525 [2024-10-01 15:19:16.028726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:17.525 [2024-10-01 15:19:16.028743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.028787] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.525 [2024-10-01 15:19:16.029149] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.525 [2024-10-01 15:19:16.029184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.029198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.525 [2024-10-01 15:19:16.029212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:17.525 [2024-10-01 15:19:16.029235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.029400] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:19:17.525 [2024-10-01 15:19:16.030857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.030892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:17.525 [2024-10-01 15:19:16.030908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:17.525 [2024-10-01 15:19:16.030919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.038333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.038373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.525 [2024-10-01 15:19:16.038390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.363 ms 00:19:17.525 [2024-10-01 15:19:16.038401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.038489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.038501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.525 [2024-10-01 15:19:16.038515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:17.525 [2024-10-01 15:19:16.038529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.038582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.038594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.525 [2024-10-01 15:19:16.038607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:17.525 [2024-10-01 15:19:16.038617] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.038646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.525 [2024-10-01 15:19:16.040529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.040566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.525 [2024-10-01 15:19:16.040583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:19:17.525 [2024-10-01 15:19:16.040598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.040633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.040661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.525 [2024-10-01 15:19:16.040673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:17.525 [2024-10-01 15:19:16.040698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.040731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:17.525 [2024-10-01 15:19:16.040884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.525 [2024-10-01 15:19:16.040898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.525 [2024-10-01 15:19:16.040914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:17.525 [2024-10-01 15:19:16.040927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.525 [2024-10-01 15:19:16.040941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.525 [2024-10-01 15:19:16.040952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:17.525 [2024-10-01 15:19:16.040971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.525 [2024-10-01 15:19:16.040981] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.525 [2024-10-01 15:19:16.040993] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.525 [2024-10-01 15:19:16.041006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.041019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.525 [2024-10-01 15:19:16.041030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:17.525 [2024-10-01 15:19:16.041042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.041115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.525 [2024-10-01 15:19:16.041131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.525 [2024-10-01 15:19:16.041142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:17.525 [2024-10-01 15:19:16.041154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.525 [2024-10-01 15:19:16.041268] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.525 [2024-10-01 15:19:16.041288] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:19:17.525 [2024-10-01 15:19:16.041299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.525 [2024-10-01 15:19:16.041335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.525 [2024-10-01 15:19:16.041366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.525 [2024-10-01 15:19:16.041387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.525 [2024-10-01 15:19:16.041400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:17.525 [2024-10-01 15:19:16.041410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.525 [2024-10-01 15:19:16.041425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.525 [2024-10-01 15:19:16.041435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:17.525 [2024-10-01 15:19:16.041447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.525 [2024-10-01 15:19:16.041469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.525 [2024-10-01 15:19:16.041500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.525 [2024-10-01 15:19:16.041533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.525 [2024-10-01 15:19:16.041563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.525 [2024-10-01 15:19:16.041598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.525 [2024-10-01 15:19:16.041618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.525 [2024-10-01 15:19:16.041627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041638] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.525 [2024-10-01 15:19:16.041647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.525 [2024-10-01 15:19:16.041659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:17.525 [2024-10-01 15:19:16.041668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.525 [2024-10-01 15:19:16.041680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.525 [2024-10-01 15:19:16.041689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:17.525 [2024-10-01 15:19:16.041702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.525 [2024-10-01 15:19:16.041722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:17.525 [2024-10-01 15:19:16.041731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.525 [2024-10-01 15:19:16.041743] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.525 [2024-10-01 15:19:16.041753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.525 [2024-10-01 15:19:16.041768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.526 [2024-10-01 15:19:16.041777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.526 [2024-10-01 15:19:16.041790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.526 [2024-10-01 15:19:16.041800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.526 [2024-10-01 15:19:16.041812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.526 [2024-10-01 15:19:16.041822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.526 [2024-10-01 15:19:16.041834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.526 [2024-10-01 15:19:16.041843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.526 [2024-10-01 15:19:16.041859] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.526 [2024-10-01 15:19:16.041872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.041886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:17.526 [2024-10-01 15:19:16.041897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:17.526 [2024-10-01 15:19:16.041910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:17.526 [2024-10-01 15:19:16.041920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:17.526 [2024-10-01 15:19:16.041933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:17.526 [2024-10-01 15:19:16.041943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:19:17.526 [2024-10-01 15:19:16.041958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:17.526 [2024-10-01 15:19:16.041968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:17.526 [2024-10-01 15:19:16.041982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:17.526 [2024-10-01 15:19:16.041993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:17.526 [2024-10-01 15:19:16.042050] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.526 [2024-10-01 15:19:16.042064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.526 [2024-10-01 15:19:16.042089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.526 [2024-10-01 15:19:16.042103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.526 [2024-10-01 15:19:16.042119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.526 [2024-10-01 15:19:16.042133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.526 [2024-10-01 15:19:16.042144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.526 [2024-10-01 15:19:16.042159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:19:17.526 [2024-10-01 15:19:16.042178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.526 [2024-10-01 15:19:16.042225] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
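A note on reading the SB metadata rows above: they list each region as raw hex block offsets and sizes, while the layout dump earlier in the log reports the same regions in MiB. The two agree if one assumes a 4 KiB FTL block, which the dump itself corroborates: region type 0x2 has blk_sz:0x5000 = 20480 blocks, and 20480 * 4096 B is exactly the l2p region's 80.00 MiB. A minimal sketch (not part of the test suite) for doing that conversion by hand:

decode_region() {
  # convert a hex block offset/size pair to MiB, assuming 4 KiB FTL blocks
  local blk_offs=$1 blk_sz=$2
  awk -v o=$((blk_offs)) -v s=$((blk_sz)) \
      'BEGIN { printf "offset: %.2f MiB, blocks: %.2f MiB\n",
               o * 4096 / 1048576, s * 4096 / 1048576 }'
}
decode_region 0x20 0x5000    # -> offset: 0.12 MiB, blocks: 80.00 MiB (the l2p region)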
00:19:17.526 [2024-10-01 15:19:16.042238] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:20.813 [2024-10-01 15:19:18.829795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.813 [2024-10-01 15:19:18.829868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:20.813 [2024-10-01 15:19:18.829893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2792.089 ms 00:19:20.813 [2024-10-01 15:19:18.829904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.813 [2024-10-01 15:19:18.841028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.813 [2024-10-01 15:19:18.841084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.814 [2024-10-01 15:19:18.841104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.039 ms 00:19:20.814 [2024-10-01 15:19:18.841115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.841246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.841260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.814 [2024-10-01 15:19:18.841277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:20.814 [2024-10-01 15:19:18.841298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.851680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.851726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.814 [2024-10-01 15:19:18.851745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.345 ms 00:19:20.814 [2024-10-01 15:19:18.851756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.851795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.851813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.814 [2024-10-01 15:19:18.851832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.814 [2024-10-01 15:19:18.851842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.852317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.852332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.814 [2024-10-01 15:19:18.852345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:19:20.814 [2024-10-01 15:19:18.852356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.852460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.852471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.814 [2024-10-01 15:19:18.852484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:20.814 [2024-10-01 15:19:18.852504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.868119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.868182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.814 [2024-10-01 
15:19:18.868204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.614 ms 00:19:20.814 [2024-10-01 15:19:18.868218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.876836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.814 [2024-10-01 15:19:18.879982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.880015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.814 [2024-10-01 15:19:18.880029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.680 ms 00:19:20.814 [2024-10-01 15:19:18.880042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.939004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.939088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:20.814 [2024-10-01 15:19:18.939106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.025 ms 00:19:20.814 [2024-10-01 15:19:18.939131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.939340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.939357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.814 [2024-10-01 15:19:18.939369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:20.814 [2024-10-01 15:19:18.939382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.942922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.942965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:20.814 [2024-10-01 15:19:18.942979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:19:20.814 [2024-10-01 15:19:18.942992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.945730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.945770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:20.814 [2024-10-01 15:19:18.945783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:19:20.814 [2024-10-01 15:19:18.945795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.946068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.946084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.814 [2024-10-01 15:19:18.946096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:20.814 [2024-10-01 15:19:18.946111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.977535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.977586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:20.814 [2024-10-01 15:19:18.977601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.452 ms 00:19:20.814 [2024-10-01 15:19:18.977615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.981812] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.981854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:20.814 [2024-10-01 15:19:18.981868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:19:20.814 [2024-10-01 15:19:18.981882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.984885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.984925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:20.814 [2024-10-01 15:19:18.984937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:20.814 [2024-10-01 15:19:18.984950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.988570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.988613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.814 [2024-10-01 15:19:18.988627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.591 ms 00:19:20.814 [2024-10-01 15:19:18.988645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.988687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.988703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.814 [2024-10-01 15:19:18.988715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.814 [2024-10-01 15:19:18.988729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.988813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.814 [2024-10-01 15:19:18.988828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.814 [2024-10-01 15:19:18.988847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:20.814 [2024-10-01 15:19:18.988871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.814 [2024-10-01 15:19:18.989902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2965.814 ms, result 0 00:19:20.814 { 00:19:20.814 "name": "ftl0", 00:19:20.814 "uuid": "a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59" 00:19:20.814 } 00:19:20.814 15:19:19 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:20.814 15:19:19 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:20.814 15:19:19 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:20.814 15:19:19 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:21.073 [2024-10-01 15:19:19.425048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.073 [2024-10-01 15:19:19.425100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:21.073 [2024-10-01 15:19:19.425121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:21.073 [2024-10-01 15:19:19.425131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.073 [2024-10-01 15:19:19.425162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:21.073 
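For context on the restore.sh@61-63 trace above: the test wraps the live bdev configuration emitted by save_subsystem_config in a top-level "subsystems" array, so the spdk_dd run further down can replay it via its --json=.../ftl/config/ftl.json argument. A stand-alone sketch of the same idiom, reconstructed from the traced commands (the output variable here is illustrative, not taken from the script):

{
  echo '{"subsystems": ['
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > "$FTL_JSON"    # hypothetical path variable; restore.sh uses its own config location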
[2024-10-01 15:19:19.425858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.073 [2024-10-01 15:19:19.425878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:21.073 [2024-10-01 15:19:19.425889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:19:21.073 [2024-10-01 15:19:19.425902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.073 [2024-10-01 15:19:19.426126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.073 [2024-10-01 15:19:19.426141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:21.073 [2024-10-01 15:19:19.426152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:21.073 [2024-10-01 15:19:19.426165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.073 [2024-10-01 15:19:19.428687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.073 [2024-10-01 15:19:19.428718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:21.073 [2024-10-01 15:19:19.428730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:19:21.073 [2024-10-01 15:19:19.428743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.433719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.433757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.074 [2024-10-01 15:19:19.433770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.964 ms 00:19:21.074 [2024-10-01 15:19:19.433784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.435189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.435232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.074 [2024-10-01 15:19:19.435245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:19:21.074 [2024-10-01 15:19:19.435257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.439362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.439408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.074 [2024-10-01 15:19:19.439422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:19:21.074 [2024-10-01 15:19:19.439434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.439544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.439560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.074 [2024-10-01 15:19:19.439571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:21.074 [2024-10-01 15:19:19.439586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.441270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.441305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.074 [2024-10-01 15:19:19.441317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:19:21.074 [2024-10-01 15:19:19.441329] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.442633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.442675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.074 [2024-10-01 15:19:19.442686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:19:21.074 [2024-10-01 15:19:19.442698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.443771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.443806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.074 [2024-10-01 15:19:19.443818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:19:21.074 [2024-10-01 15:19:19.443831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.445042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.074 [2024-10-01 15:19:19.445082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.074 [2024-10-01 15:19:19.445094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:19:21.074 [2024-10-01 15:19:19.445106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.074 [2024-10-01 15:19:19.445135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.074 [2024-10-01 15:19:19.445154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445344] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 
[2024-10-01 15:19:19.445647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.074 [2024-10-01 15:19:19.445929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.445941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.445952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:19:21.075 [2024-10-01 15:19:19.445978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.445990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.075 [2024-10-01 15:19:19.446431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.075 [2024-10-01 15:19:19.446442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:19:21.075 [2024-10-01 15:19:19.446456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.075 [2024-10-01 15:19:19.446466] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.075 [2024-10-01 15:19:19.446479] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.075 [2024-10-01 15:19:19.446489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.075 [2024-10-01 15:19:19.446501] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.075 [2024-10-01 15:19:19.446511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.075 [2024-10-01 15:19:19.446523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.075 [2024-10-01 15:19:19.446532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.075 [2024-10-01 15:19:19.446545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.075 [2024-10-01 15:19:19.446557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.075 [2024-10-01 15:19:19.446570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.075 [2024-10-01 15:19:19.446591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:19:21.075 [2024-10-01 15:19:19.446604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.448369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.075 [2024-10-01 15:19:19.448400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.075 
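The hundred identical band lines above are expected for a freshly created device with no user I/O yet; the stats block confirms it ("user writes: 0", so the write amplification factor is reported as "WAF: inf"). When scanning a capture like this by hand, a one-liner can condense the band dump; the log filename below is a placeholder:

grep -o 'state: [a-z]*' console.log | sort | uniq -c
#    100 state: free    <- all bands free, matching wr_cnt: 0 on every band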
[2024-10-01 15:19:19.448411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:19:21.075 [2024-10-01 15:19:19.448424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.448546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.075 [2024-10-01 15:19:19.448561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.075 [2024-10-01 15:19:19.448572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:21.075 [2024-10-01 15:19:19.448583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.455521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.455556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.075 [2024-10-01 15:19:19.455569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.455582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.455647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.455661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.075 [2024-10-01 15:19:19.455672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.455685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.455762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.455782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.075 [2024-10-01 15:19:19.455792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.455805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.455825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.455838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.075 [2024-10-01 15:19:19.455851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.455864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.468527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.468589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.075 [2024-10-01 15:19:19.468604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.468619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.477871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.477926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.075 [2024-10-01 15:19:19.477940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.477957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.478036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.478054] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.075 [2024-10-01 15:19:19.478065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.478078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.478116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.478131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.075 [2024-10-01 15:19:19.478142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.478158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.478252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.478268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.075 [2024-10-01 15:19:19.478279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.478291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.478328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.478343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.075 [2024-10-01 15:19:19.478354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.478369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.075 [2024-10-01 15:19:19.478410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.075 [2024-10-01 15:19:19.478427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.075 [2024-10-01 15:19:19.478437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.075 [2024-10-01 15:19:19.478450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.076 [2024-10-01 15:19:19.478494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.076 [2024-10-01 15:19:19.478508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.076 [2024-10-01 15:19:19.478518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.076 [2024-10-01 15:19:19.478534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.076 [2024-10-01 15:19:19.478660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.677 ms, result 0 00:19:21.076 true 00:19:21.076 15:19:19 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87364 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87364 ']' 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87364 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87364 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:21.076 killing process with pid 87364 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:21.076 
15:19:19 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87364' 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 87364 00:19:21.076 15:19:19 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 87364 00:19:24.397 15:19:22 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:28.593 262144+0 records in 00:19:28.593 262144+0 records out 00:19:28.593 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.18085 s, 257 MB/s 00:19:28.593 15:19:26 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:30.005 15:19:28 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.281 [2024-10-01 15:19:28.563825] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:19:30.281 [2024-10-01 15:19:28.563998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87573 ] 00:19:30.281 [2024-10-01 15:19:28.735408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.281 [2024-10-01 15:19:28.784742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.545 [2024-10-01 15:19:28.888567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.545 [2024-10-01 15:19:28.888641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.545 [2024-10-01 15:19:29.048083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.048149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.545 [2024-10-01 15:19:29.048189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.545 [2024-10-01 15:19:29.048199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.048268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.048282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.545 [2024-10-01 15:19:29.048292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:30.545 [2024-10-01 15:19:29.048302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.048332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.545 [2024-10-01 15:19:29.048617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.545 [2024-10-01 15:19:29.048636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.048646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.545 [2024-10-01 15:19:29.048660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:30.545 [2024-10-01 15:19:29.048669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.050146] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
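The dd figures above are internally consistent: bs=4K with count=256K is 262144 records of 4096 bytes, i.e. exactly 1 GiB, and 1 GiB over 4.18085 s comes out at the reported 257 MB/s (dd uses decimal megabytes). Spelled out, assuming nothing beyond coreutils-style units:

awk 'BEGIN {
  bytes = 262144 * 4096                           # 256K records * 4 KiB = 1073741824 B
  printf "%.0f MB/s\n", bytes / 4.18085 / 1e6     # -> 257 MB/s, as dd reported
}'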
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.545 [2024-10-01 15:19:29.052789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.052828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.545 [2024-10-01 15:19:29.052842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:19:30.545 [2024-10-01 15:19:29.052852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.052916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.052940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.545 [2024-10-01 15:19:29.052950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:30.545 [2024-10-01 15:19:29.052971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.059811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.059847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.545 [2024-10-01 15:19:29.059870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.793 ms 00:19:30.545 [2024-10-01 15:19:29.059882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.059990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.060004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.545 [2024-10-01 15:19:29.060015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:30.545 [2024-10-01 15:19:29.060025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.060093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.060109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.545 [2024-10-01 15:19:29.060130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:30.545 [2024-10-01 15:19:29.060140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.060202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.545 [2024-10-01 15:19:29.061829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.061858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.545 [2024-10-01 15:19:29.061870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:19:30.545 [2024-10-01 15:19:29.061890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.061928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-10-01 15:19:29.061939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.545 [2024-10-01 15:19:29.061957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:30.545 [2024-10-01 15:19:29.061967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-10-01 15:19:29.061998] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.545 [2024-10-01 15:19:29.062026] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.545 [2024-10-01 15:19:29.062064] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.545 [2024-10-01 15:19:29.062087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.545 [2024-10-01 15:19:29.062193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.545 [2024-10-01 15:19:29.062207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.545 [2024-10-01 15:19:29.062229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.545 [2024-10-01 15:19:29.062242] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.545 [2024-10-01 15:19:29.062258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.545 [2024-10-01 15:19:29.062269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:30.545 [2024-10-01 15:19:29.062287] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.545 [2024-10-01 15:19:29.062304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.545 [2024-10-01 15:19:29.062314] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.546 [2024-10-01 15:19:29.062331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-10-01 15:19:29.062342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.546 [2024-10-01 15:19:29.062352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:30.546 [2024-10-01 15:19:29.062361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-10-01 15:19:29.062432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-10-01 15:19:29.062446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.546 [2024-10-01 15:19:29.062461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:30.546 [2024-10-01 15:19:29.062471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-10-01 15:19:29.062570] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.546 [2024-10-01 15:19:29.062590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.546 [2024-10-01 15:19:29.062600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.546 [2024-10-01 15:19:29.062656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.546 [2024-10-01 15:19:29.062688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:30.546 [2024-10-01 
15:19:29.062697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.546 [2024-10-01 15:19:29.062706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.546 [2024-10-01 15:19:29.062716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:30.546 [2024-10-01 15:19:29.062726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.546 [2024-10-01 15:19:29.062739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.546 [2024-10-01 15:19:29.062749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:30.546 [2024-10-01 15:19:29.062758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:30.546 [2024-10-01 15:19:29.062776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.546 [2024-10-01 15:19:29.062805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.546 [2024-10-01 15:19:29.062832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.546 [2024-10-01 15:19:29.062859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.546 [2024-10-01 15:19:29.062891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.546 [2024-10-01 15:19:29.062910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.546 [2024-10-01 15:19:29.062919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.546 [2024-10-01 15:19:29.062936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.546 [2024-10-01 15:19:29.062945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:30.546 [2024-10-01 15:19:29.062954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.546 [2024-10-01 15:19:29.062963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.546 [2024-10-01 15:19:29.062971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:30.546 [2024-10-01 15:19:29.062980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.062989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:19:30.546 [2024-10-01 15:19:29.062998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:30.546 [2024-10-01 15:19:29.063007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.063017] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.546 [2024-10-01 15:19:29.063027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.546 [2024-10-01 15:19:29.063047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.546 [2024-10-01 15:19:29.063060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.546 [2024-10-01 15:19:29.063069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.546 [2024-10-01 15:19:29.063079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.546 [2024-10-01 15:19:29.063087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.546 [2024-10-01 15:19:29.063096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.546 [2024-10-01 15:19:29.063105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.546 [2024-10-01 15:19:29.063114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.546 [2024-10-01 15:19:29.063125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.546 [2024-10-01 15:19:29.063137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:30.546 [2024-10-01 15:19:29.063157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:30.546 [2024-10-01 15:19:29.063168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:30.546 [2024-10-01 15:19:29.063187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:30.546 [2024-10-01 15:19:29.063197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:30.546 [2024-10-01 15:19:29.063207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:30.546 [2024-10-01 15:19:29.063221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:30.546 [2024-10-01 15:19:29.063231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:30.546 [2024-10-01 15:19:29.063241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:30.546 [2024-10-01 15:19:29.063251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:30.546 [2024-10-01 15:19:29.063302] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.546 [2024-10-01 15:19:29.063314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.546 [2024-10-01 15:19:29.063335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.546 [2024-10-01 15:19:29.063345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.546 [2024-10-01 15:19:29.063359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.546 [2024-10-01 15:19:29.063371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-10-01 15:19:29.063381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.546 [2024-10-01 15:19:29.063402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:19:30.546 [2024-10-01 15:19:29.063412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-10-01 15:19:29.085486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-10-01 15:19:29.085549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.546 [2024-10-01 15:19:29.085573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.041 ms 00:19:30.546 [2024-10-01 15:19:29.085599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-10-01 15:19:29.085715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-10-01 15:19:29.085730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.546 [2024-10-01 15:19:29.085754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:30.546 [2024-10-01 15:19:29.085767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.097363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.097411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.805 [2024-10-01 15:19:29.097428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.526 ms 00:19:30.805 [2024-10-01 15:19:29.097441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.097496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 
15:19:29.097509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.805 [2024-10-01 15:19:29.097522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.805 [2024-10-01 15:19:29.097535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.098036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.098065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.805 [2024-10-01 15:19:29.098079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:30.805 [2024-10-01 15:19:29.098090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.098254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.098270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.805 [2024-10-01 15:19:29.098283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:30.805 [2024-10-01 15:19:29.098304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.104400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.104440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.805 [2024-10-01 15:19:29.104458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.080 ms 00:19:30.805 [2024-10-01 15:19:29.104469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.107149] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:30.805 [2024-10-01 15:19:29.107209] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.805 [2024-10-01 15:19:29.107228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.107239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.805 [2024-10-01 15:19:29.107250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:19:30.805 [2024-10-01 15:19:29.107260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.121165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.121217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.805 [2024-10-01 15:19:29.121239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.881 ms 00:19:30.805 [2024-10-01 15:19:29.121253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.123573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.123600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.805 [2024-10-01 15:19:29.123613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:19:30.805 [2024-10-01 15:19:29.123623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.125040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.125070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:19:30.805 [2024-10-01 15:19:29.125081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:19:30.805 [2024-10-01 15:19:29.125090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.125429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.125446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.805 [2024-10-01 15:19:29.125458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:30.805 [2024-10-01 15:19:29.125467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.145501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.145566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.805 [2024-10-01 15:19:29.145590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.043 ms 00:19:30.805 [2024-10-01 15:19:29.145604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.152207] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.805 [2024-10-01 15:19:29.155492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.155523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.805 [2024-10-01 15:19:29.155537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:19:30.805 [2024-10-01 15:19:29.155548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.155662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.155678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.805 [2024-10-01 15:19:29.155690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.805 [2024-10-01 15:19:29.155716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.805 [2024-10-01 15:19:29.155805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.805 [2024-10-01 15:19:29.155835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.806 [2024-10-01 15:19:29.155847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:30.806 [2024-10-01 15:19:29.155857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-10-01 15:19:29.155888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.806 [2024-10-01 15:19:29.155908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.806 [2024-10-01 15:19:29.155919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.806 [2024-10-01 15:19:29.155929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-10-01 15:19:29.155967] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.806 [2024-10-01 15:19:29.155987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.806 [2024-10-01 15:19:29.155999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.806 [2024-10-01 15:19:29.156012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.022 ms 00:19:30.806 [2024-10-01 15:19:29.156023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-10-01 15:19:29.159599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.806 [2024-10-01 15:19:29.159645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.806 [2024-10-01 15:19:29.159658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:19:30.806 [2024-10-01 15:19:29.159677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-10-01 15:19:29.159771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.806 [2024-10-01 15:19:29.159784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.806 [2024-10-01 15:19:29.159796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:30.806 [2024-10-01 15:19:29.159806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-10-01 15:19:29.160991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.649 ms, result 0 00:20:07.134 [spdk_dd copy progress elided] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-01 15:20:05.454349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.134 [2024-10-01 15:20:05.454413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:07.134 [2024-10-01 15:20:05.454431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:07.134 [2024-10-01 15:20:05.454442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.134 [2024-10-01 15:20:05.454465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.134 [2024-10-01 15:20:05.455145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.134 [2024-10-01 15:20:05.455165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.134 [2024-10-01 15:20:05.455188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:20:07.134 [2024-10-01 15:20:05.455198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.134 [2024-10-01 15:20:05.456945] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.134 [2024-10-01 15:20:05.456988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.134 [2024-10-01 15:20:05.457013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:20:07.134 [2024-10-01 15:20:05.457024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.134 [2024-10-01 15:20:05.474240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.134 [2024-10-01 15:20:05.474287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.135 [2024-10-01 15:20:05.474315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.222 ms 00:20:07.135 [2024-10-01 15:20:05.474325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.479339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.479377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.135 [2024-10-01 15:20:05.479389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms 00:20:07.135 [2024-10-01 15:20:05.479399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.481335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.481390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.135 [2024-10-01 15:20:05.481403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:20:07.135 [2024-10-01 15:20:05.481413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.485164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.485219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.135 [2024-10-01 15:20:05.485232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:20:07.135 [2024-10-01 15:20:05.485252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.485361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.485374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.135 [2024-10-01 15:20:05.485392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:07.135 [2024-10-01 15:20:05.485401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.487429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.487466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.135 [2024-10-01 15:20:05.487478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:20:07.135 [2024-10-01 15:20:05.487487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.489145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.489205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.135 [2024-10-01 15:20:05.489217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:20:07.135 [2024-10-01 15:20:05.489227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
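[editor's note] The shutdown sequence above persists each metadata region in turn, and every step is logged through trace_step as a name/duration/status triple; finish_msg later reports the process total ('FTL shutdown', 69.357 ms, below). A minimal cross-check sketch, assuming this console output has been saved to a file whose name (ftl_restore.log) is purely illustrative:

    # Sum every per-step duration reported by trace_step in the capture
    # and print the total for comparison against the finish_msg figures.
    grep -o 'duration: [0-9.]* ms' ftl_restore.log |
      awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'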
00:20:07.135 [2024-10-01 15:20:05.490339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.490374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.135 [2024-10-01 15:20:05.490385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:20:07.135 [2024-10-01 15:20:05.490395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.491515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.135 [2024-10-01 15:20:05.491550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:07.135 [2024-10-01 15:20:05.491561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:20:07.135 [2024-10-01 15:20:05.491570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.135 [2024-10-01 15:20:05.491595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:07.135 [2024-10-01 15:20:05.491613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.491989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:07.135 [2024-10-01 15:20:05.492104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492348] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 
15:20:05.492608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:07.136 [2024-10-01 15:20:05.492699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:07.136 [2024-10-01 15:20:05.492713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:20:07.136 [2024-10-01 15:20:05.492724] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:07.136 [2024-10-01 15:20:05.492734] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:07.136 [2024-10-01 15:20:05.492743] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:07.136 [2024-10-01 15:20:05.492763] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:07.136 [2024-10-01 15:20:05.492773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:07.136 [2024-10-01 15:20:05.492784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:07.136 [2024-10-01 15:20:05.492794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:07.136 [2024-10-01 15:20:05.492803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:07.136 [2024-10-01 15:20:05.492812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:07.136 [2024-10-01 15:20:05.492821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.136 [2024-10-01 15:20:05.492832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:07.136 [2024-10-01 15:20:05.492842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:20:07.136 [2024-10-01 15:20:05.492867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.136 [2024-10-01 15:20:05.494598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.136 [2024-10-01 15:20:05.494634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:07.136 [2024-10-01 15:20:05.494645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:20:07.136 [2024-10-01 15:20:05.494655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.136 [2024-10-01 15:20:05.494758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.136 [2024-10-01 15:20:05.494769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:07.136 [2024-10-01 
15:20:05.494791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:07.136 [2024-10-01 15:20:05.494807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.136 [2024-10-01 15:20:05.500924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.136 [2024-10-01 15:20:05.500964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.136 [2024-10-01 15:20:05.500976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.136 [2024-10-01 15:20:05.500989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.136 [2024-10-01 15:20:05.501045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.501056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.137 [2024-10-01 15:20:05.501073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.501083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.501129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.501143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.137 [2024-10-01 15:20:05.501153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.501162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.501191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.501201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.137 [2024-10-01 15:20:05.501212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.501226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.514546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.514597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.137 [2024-10-01 15:20:05.514622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.514632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.522920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.522965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.137 [2024-10-01 15:20:05.522979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.522997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.137 [2024-10-01 15:20:05.523076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523121] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.137 [2024-10-01 15:20:05.523141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.137 [2024-10-01 15:20:05.523268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.137 [2024-10-01 15:20:05.523346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.137 [2024-10-01 15:20:05.523420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.137 [2024-10-01 15:20:05.523483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.137 [2024-10-01 15:20:05.523494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.137 [2024-10-01 15:20:05.523503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.137 [2024-10-01 15:20:05.523640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.357 ms, result 0 00:20:07.705 00:20:07.705 00:20:07.705 15:20:06 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:07.963 [2024-10-01 15:20:06.276428] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
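[editor's note] The restore.sh line-74 command echoed above reads the test data back out of the FTL bdev with spdk_dd so it can be compared against what was written. The same invocation, re-wrapped for readability; all paths and flags are taken verbatim from the log line:

    # Read 262144 blocks back from the ftl0 bdev into a plain file.
    # --ib selects an SPDK bdev as the input, --of a regular output
    # file, --json the bdev configuration used to attach ftl0.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --count=262144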
00:20:07.964 [2024-10-01 15:20:06.276574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87962 ] 00:20:07.964 [2024-10-01 15:20:06.445198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.964 [2024-10-01 15:20:06.495856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.223 [2024-10-01 15:20:06.600816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.223 [2024-10-01 15:20:06.600905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.223 [2024-10-01 15:20:06.760818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.223 [2024-10-01 15:20:06.760893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:08.223 [2024-10-01 15:20:06.760912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.223 [2024-10-01 15:20:06.760932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.223 [2024-10-01 15:20:06.760993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.223 [2024-10-01 15:20:06.761006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.223 [2024-10-01 15:20:06.761024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:08.223 [2024-10-01 15:20:06.761034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.223 [2024-10-01 15:20:06.761057] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:08.223 [2024-10-01 15:20:06.761368] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:08.223 [2024-10-01 15:20:06.761394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.223 [2024-10-01 15:20:06.761413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.223 [2024-10-01 15:20:06.761434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:08.223 [2024-10-01 15:20:06.761444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.223 [2024-10-01 15:20:06.762933] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:08.223 [2024-10-01 15:20:06.765952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.223 [2024-10-01 15:20:06.766002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:08.223 [2024-10-01 15:20:06.766016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:20:08.223 [2024-10-01 15:20:06.766027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.223 [2024-10-01 15:20:06.766096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.223 [2024-10-01 15:20:06.766112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:08.223 [2024-10-01 15:20:06.766124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:08.223 [2024-10-01 15:20:06.766137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.773218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:08.483 [2024-10-01 15:20:06.773259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.483 [2024-10-01 15:20:06.773273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:20:08.483 [2024-10-01 15:20:06.773284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.773403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.773421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.483 [2024-10-01 15:20:06.773439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:08.483 [2024-10-01 15:20:06.773449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.773518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.773537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:08.483 [2024-10-01 15:20:06.773555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.483 [2024-10-01 15:20:06.773572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.773599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.483 [2024-10-01 15:20:06.775294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.775324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.483 [2024-10-01 15:20:06.775337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:20:08.483 [2024-10-01 15:20:06.775347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.775378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.775390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:08.483 [2024-10-01 15:20:06.775411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:08.483 [2024-10-01 15:20:06.775421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.775444] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:08.483 [2024-10-01 15:20:06.775472] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:08.483 [2024-10-01 15:20:06.775525] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:08.483 [2024-10-01 15:20:06.775544] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:08.483 [2024-10-01 15:20:06.775642] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:08.483 [2024-10-01 15:20:06.775655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:08.483 [2024-10-01 15:20:06.775668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:08.483 [2024-10-01 15:20:06.775689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:08.483 [2024-10-01 15:20:06.775705] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:08.483 [2024-10-01 15:20:06.775717] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:08.483 [2024-10-01 15:20:06.775731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:08.483 [2024-10-01 15:20:06.775748] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:08.483 [2024-10-01 15:20:06.775759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:08.483 [2024-10-01 15:20:06.775775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.775786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:08.483 [2024-10-01 15:20:06.775796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:20:08.483 [2024-10-01 15:20:06.775809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.775881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.483 [2024-10-01 15:20:06.775901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:08.483 [2024-10-01 15:20:06.775915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:08.483 [2024-10-01 15:20:06.775929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.483 [2024-10-01 15:20:06.776032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:08.483 [2024-10-01 15:20:06.776048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:08.483 [2024-10-01 15:20:06.776066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.483 [2024-10-01 15:20:06.776090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.483 [2024-10-01 15:20:06.776108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:08.483 [2024-10-01 15:20:06.776118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:08.483 [2024-10-01 15:20:06.776127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:08.483 [2024-10-01 15:20:06.776137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:08.483 [2024-10-01 15:20:06.776147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:08.483 [2024-10-01 15:20:06.776156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.484 [2024-10-01 15:20:06.776166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:08.484 [2024-10-01 15:20:06.776193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:08.484 [2024-10-01 15:20:06.776202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.484 [2024-10-01 15:20:06.776215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:08.484 [2024-10-01 15:20:06.776224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:08.484 [2024-10-01 15:20:06.776234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:08.484 [2024-10-01 15:20:06.776253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776262] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:08.484 [2024-10-01 15:20:06.776281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:08.484 [2024-10-01 15:20:06.776317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:08.484 [2024-10-01 15:20:06.776343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:08.484 [2024-10-01 15:20:06.776379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:08.484 [2024-10-01 15:20:06.776406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.484 [2024-10-01 15:20:06.776424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:08.484 [2024-10-01 15:20:06.776433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:08.484 [2024-10-01 15:20:06.776442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.484 [2024-10-01 15:20:06.776451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:08.484 [2024-10-01 15:20:06.776460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:08.484 [2024-10-01 15:20:06.776469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:08.484 [2024-10-01 15:20:06.776487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:08.484 [2024-10-01 15:20:06.776496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776506] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:08.484 [2024-10-01 15:20:06.776523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:08.484 [2024-10-01 15:20:06.776543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.484 [2024-10-01 15:20:06.776577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:08.484 [2024-10-01 15:20:06.776587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:08.484 [2024-10-01 15:20:06.776596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:08.484 
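[editor's note] The SB metadata rows that follow give each region's offset and size as hex counts of FTL blocks, while the region dump above prints the same geometry in MiB; matching the two (e.g. l2p: blk_offs 0x20 / blk_sz 0x5000 against offset 0.12 MiB / 80.00 MiB) implies a 4 KiB block size, which the sketch below assumes:

    # Convert a blk_offs/blk_sz value (hex, in FTL blocks) to MiB,
    # assuming the 4 KiB block size inferred from the dumps above.
    blk_to_mib() {
      awk -v blks="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", blks * 4096 / 1048576 }'
    }
    blk_to_mib 0x20     # l2p blk_offs -> 0.12 MiB
    blk_to_mib 0x5000   # l2p blk_sz   -> 80.00 MiB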
[2024-10-01 15:20:06.776605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:08.484 [2024-10-01 15:20:06.776614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:08.484 [2024-10-01 15:20:06.776623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:08.484 [2024-10-01 15:20:06.776634] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:08.484 [2024-10-01 15:20:06.776647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:08.484 [2024-10-01 15:20:06.776668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:08.484 [2024-10-01 15:20:06.776678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:08.484 [2024-10-01 15:20:06.776689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:08.484 [2024-10-01 15:20:06.776699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:08.484 [2024-10-01 15:20:06.776710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:08.484 [2024-10-01 15:20:06.776723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:08.484 [2024-10-01 15:20:06.776734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:08.484 [2024-10-01 15:20:06.776745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:08.484 [2024-10-01 15:20:06.776755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:08.484 [2024-10-01 15:20:06.776806] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:08.484 [2024-10-01 15:20:06.776817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:08.484 [2024-10-01 15:20:06.776838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:08.484 [2024-10-01 15:20:06.776848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:08.484 [2024-10-01 15:20:06.776859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:08.484 [2024-10-01 15:20:06.776870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.776882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:08.484 [2024-10-01 15:20:06.776895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:20:08.484 [2024-10-01 15:20:06.776912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.801560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.801635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.484 [2024-10-01 15:20:06.801660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.613 ms 00:20:08.484 [2024-10-01 15:20:06.801674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.801797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.801829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.484 [2024-10-01 15:20:06.801844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:08.484 [2024-10-01 15:20:06.801856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.813027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.813276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.484 [2024-10-01 15:20:06.813305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.089 ms 00:20:08.484 [2024-10-01 15:20:06.813316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.813379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.813391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.484 [2024-10-01 15:20:06.813402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.484 [2024-10-01 15:20:06.813413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.813927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.813945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.484 [2024-10-01 15:20:06.813956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:20:08.484 [2024-10-01 15:20:06.813966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.814090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.814103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.484 [2024-10-01 15:20:06.814114] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:08.484 [2024-10-01 15:20:06.814124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.820258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.484 [2024-10-01 15:20:06.820445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.484 [2024-10-01 15:20:06.820475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:20:08.484 [2024-10-01 15:20:06.820494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.484 [2024-10-01 15:20:06.823166] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:08.484 [2024-10-01 15:20:06.823234] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:08.485 [2024-10-01 15:20:06.823253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.823264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:08.485 [2024-10-01 15:20:06.823275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:20:08.485 [2024-10-01 15:20:06.823285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.837017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.837091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:08.485 [2024-10-01 15:20:06.837124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.707 ms 00:20:08.485 [2024-10-01 15:20:06.837135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.839832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.839875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:08.485 [2024-10-01 15:20:06.839888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:20:08.485 [2024-10-01 15:20:06.839899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.841578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.841611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:08.485 [2024-10-01 15:20:06.841624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:20:08.485 [2024-10-01 15:20:06.841634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.841971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.842004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:08.485 [2024-10-01 15:20:06.842024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:20:08.485 [2024-10-01 15:20:06.842035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.864197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.864299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:08.485 [2024-10-01 15:20:06.864317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.152 ms 00:20:08.485 [2024-10-01 15:20:06.864328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.871677] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:08.485 [2024-10-01 15:20:06.875300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.875353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:08.485 [2024-10-01 15:20:06.875373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.920 ms 00:20:08.485 [2024-10-01 15:20:06.875384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.875513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.875527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:08.485 [2024-10-01 15:20:06.875538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:08.485 [2024-10-01 15:20:06.875555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.875646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.875659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:08.485 [2024-10-01 15:20:06.875669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:08.485 [2024-10-01 15:20:06.875684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.875708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.875727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:08.485 [2024-10-01 15:20:06.875737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.485 [2024-10-01 15:20:06.875747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.875784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:08.485 [2024-10-01 15:20:06.875795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.875809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:08.485 [2024-10-01 15:20:06.875819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:08.485 [2024-10-01 15:20:06.875829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.879543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.879585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:08.485 [2024-10-01 15:20:06.879608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:20:08.485 [2024-10-01 15:20:06.879618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 [2024-10-01 15:20:06.879701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.485 [2024-10-01 15:20:06.879714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:08.485 [2024-10-01 15:20:06.879725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:08.485 [2024-10-01 15:20:06.879735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.485 
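(Editor's note, not part of the captured output.) The records above follow a fixed four-record pattern emitted by mngt/ftl_mngt.c for every FTL management step: Action, name, duration, status. That makes a capture like this one easy to mine for a startup profile. A minimal sketch under stated assumptions: the awk below pulls each step name and its duration and sorts by cost; it assumes one NOTICE record per line, as SPDK prints them on the console before log wrapping, and "ftl.log" is a hypothetical file holding this capture.

awk '
  /428:trace_step/ {                      # "name: <step>" record
    s = $0
    sub(/.*name: /, "", s)                # drop everything up to the step name
    sub(/[ \t]+[0-9:.]+[ \t]*$/, "", s)   # drop the trailing elapsed-time stamp
    step = s
  }
  /430:trace_step/ {                      # "duration: <n> ms" record
    d = $0
    sub(/.*duration: /, "", d)
    sub(/ ms.*/, "", d)
    printf "%10.3f ms  %s\n", d, step     # awk coerces d to a number here
  }
' ftl.log | sort -rn

On the startup traced above, this would put "Initialize metadata" (24.613 ms) and "Restore P2L checkpoints" (22.152 ms) at the top, consistent with the 119.801 ms total reported for 'FTL startup' just below.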
[2024-10-01 15:20:06.880832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.801 ms, result 0 00:20:45.131  Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-01 15:20:43.660275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.131 [2024-10-01 15:20:43.661068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.131 [2024-10-01 15:20:43.661121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:45.131 [2024-10-01 15:20:43.661148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.131 [2024-10-01 15:20:43.661255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.131 [2024-10-01 15:20:43.662413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.131 [2024-10-01 15:20:43.662462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.131 [2024-10-01 15:20:43.662502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:20:45.131 [2024-10-01 15:20:43.662524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.131 [2024-10-01 15:20:43.662988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.131 [2024-10-01 15:20:43.663045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.131 [2024-10-01 15:20:43.663069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:20:45.131 [2024-10-01 15:20:43.663091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.131 [2024-10-01 15:20:43.669463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.131 [2024-10-01 15:20:43.669530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.131 [2024-10-01 15:20:43.669556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.350 ms 00:20:45.131 [2024-10-01 15:20:43.669577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.678223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.678269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P 
trims 00:20:45.392 [2024-10-01 15:20:43.678288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.620 ms 00:20:45.392 [2024-10-01 15:20:43.678304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.680022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.680243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.392 [2024-10-01 15:20:43.680273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:20:45.392 [2024-10-01 15:20:43.680291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.684357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.684397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.392 [2024-10-01 15:20:43.684423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:20:45.392 [2024-10-01 15:20:43.684434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.684540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.684554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.392 [2024-10-01 15:20:43.684566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:45.392 [2024-10-01 15:20:43.684576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.686600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.686637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:45.392 [2024-10-01 15:20:43.686649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:20:45.392 [2024-10-01 15:20:43.686660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.688044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.688208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:45.392 [2024-10-01 15:20:43.688229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:20:45.392 [2024-10-01 15:20:43.688241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.689327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.689361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.392 [2024-10-01 15:20:43.689373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:20:45.392 [2024-10-01 15:20:43.689383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.690368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.392 [2024-10-01 15:20:43.690400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.392 [2024-10-01 15:20:43.690412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:20:45.392 [2024-10-01 15:20:43.690422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.392 [2024-10-01 15:20:43.690449] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.392 [2024-10-01 
15:20:43.690473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.392 [2024-10-01 15:20:43.690580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 
[2024-10-01 15:20:43.690761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.690991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:20:45.393 [2024-10-01 15:20:43.691048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.393 [2024-10-01 15:20:43.691638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.394 [2024-10-01 15:20:43.691649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.394 [2024-10-01 15:20:43.691660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.394 [2024-10-01 15:20:43.691680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.394 [2024-10-01 15:20:43.691690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:20:45.394 [2024-10-01 15:20:43.691727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.394 [2024-10-01 15:20:43.691737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.394 [2024-10-01 15:20:43.691748] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.394 [2024-10-01 15:20:43.691758] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.394 [2024-10-01 15:20:43.691769] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.394 [2024-10-01 15:20:43.691779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.394 [2024-10-01 15:20:43.691790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.394 [2024-10-01 15:20:43.691799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.394 [2024-10-01 15:20:43.691809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.394 [2024-10-01 15:20:43.691818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-10-01 15:20:43.691828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.394 [2024-10-01 15:20:43.691844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:20:45.394 [2024-10-01 15:20:43.691858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.693607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-10-01 15:20:43.693630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.394 [2024-10-01 15:20:43.693642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:20:45.394 [2024-10-01 15:20:43.693652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.693767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.394 [2024-10-01 15:20:43.693783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.394 [2024-10-01 15:20:43.693795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:45.394 [2024-10-01 15:20:43.693805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.700086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.700221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.394 [2024-10-01 15:20:43.700333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.700372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.700460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.700553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.394 [2024-10-01 15:20:43.700590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.700620] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.700785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.700892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.394 [2024-10-01 15:20:43.700955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.700990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.701033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.701106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.394 [2024-10-01 15:20:43.701149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.701197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.715028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.715251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.394 [2024-10-01 15:20:43.715337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.715374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.723664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.723833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.394 [2024-10-01 15:20:43.723925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.723963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.724044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.724103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.394 [2024-10-01 15:20:43.724230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.724289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.724391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.724430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.394 [2024-10-01 15:20:43.724462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.724541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.724650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.724720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.394 [2024-10-01 15:20:43.724864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.724879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.724938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.724950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.394 [2024-10-01 15:20:43.724960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:45.394 [2024-10-01 15:20:43.724970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.725015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.725033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.394 [2024-10-01 15:20:43.725050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.725060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.725112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.394 [2024-10-01 15:20:43.725125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.394 [2024-10-01 15:20:43.725135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.394 [2024-10-01 15:20:43.725145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.394 [2024-10-01 15:20:43.725284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.104 ms, result 0 00:20:45.654 00:20:45.654 00:20:45.654 15:20:43 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:47.565 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:47.565 15:20:45 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:47.565 [2024-10-01 15:20:45.877718] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:20:47.565 [2024-10-01 15:20:45.877855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88365 ] 00:20:47.565 [2024-10-01 15:20:46.047081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.565 [2024-10-01 15:20:46.093894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.825 [2024-10-01 15:20:46.197201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.825 [2024-10-01 15:20:46.197451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.825 [2024-10-01 15:20:46.355993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.356237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:47.825 [2024-10-01 15:20:46.356270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.825 [2024-10-01 15:20:46.356282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.356347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.356359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.825 [2024-10-01 15:20:46.356375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:47.825 [2024-10-01 15:20:46.356394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.356441] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:20:47.825 [2024-10-01 15:20:46.356746] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:47.825 [2024-10-01 15:20:46.356777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.356795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.825 [2024-10-01 15:20:46.356818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:20:47.825 [2024-10-01 15:20:46.356835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.358375] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:47.825 [2024-10-01 15:20:46.360886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.360924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:47.825 [2024-10-01 15:20:46.360940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:20:47.825 [2024-10-01 15:20:46.360953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.361020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.361038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:47.825 [2024-10-01 15:20:46.361052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:47.825 [2024-10-01 15:20:46.361068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.367740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.367891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.825 [2024-10-01 15:20:46.367913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:20:47.825 [2024-10-01 15:20:46.367926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.368060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.368077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.825 [2024-10-01 15:20:46.368091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:47.825 [2024-10-01 15:20:46.368105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.368178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.368241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:47.825 [2024-10-01 15:20:46.368256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:47.825 [2024-10-01 15:20:46.368269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.368302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.825 [2024-10-01 15:20:46.369932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.369963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.825 [2024-10-01 15:20:46.369977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:20:47.825 [2024-10-01 15:20:46.369999] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.370045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.825 [2024-10-01 15:20:46.370059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:47.825 [2024-10-01 15:20:46.370072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:47.825 [2024-10-01 15:20:46.370084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.825 [2024-10-01 15:20:46.370108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:47.825 [2024-10-01 15:20:46.370138] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:47.825 [2024-10-01 15:20:46.370196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:47.825 [2024-10-01 15:20:46.370225] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:47.826 [2024-10-01 15:20:46.370318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:47.826 [2024-10-01 15:20:46.370334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:47.826 [2024-10-01 15:20:46.370349] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:47.826 [2024-10-01 15:20:46.370365] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:47.826 [2024-10-01 15:20:46.370384] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:47.826 [2024-10-01 15:20:46.370399] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:47.826 [2024-10-01 15:20:46.370412] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:47.826 [2024-10-01 15:20:46.370424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:47.826 [2024-10-01 15:20:46.370437] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:47.826 [2024-10-01 15:20:46.370450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.826 [2024-10-01 15:20:46.370470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:47.826 [2024-10-01 15:20:46.370483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:20:47.826 [2024-10-01 15:20:46.370496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.826 [2024-10-01 15:20:46.370571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.826 [2024-10-01 15:20:46.370587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:47.826 [2024-10-01 15:20:46.370609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:47.826 [2024-10-01 15:20:46.370622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.826 [2024-10-01 15:20:46.370713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:47.826 [2024-10-01 15:20:46.370728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:47.826 [2024-10-01 15:20:46.370741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:47.826 [2024-10-01 15:20:46.370764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.370784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:47.826 [2024-10-01 15:20:46.370797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.370809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:47.826 [2024-10-01 15:20:46.370823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:47.826 [2024-10-01 15:20:46.370835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:47.826 [2024-10-01 15:20:46.370847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.826 [2024-10-01 15:20:46.370859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:47.826 [2024-10-01 15:20:46.370871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:47.826 [2024-10-01 15:20:46.370883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.826 [2024-10-01 15:20:46.370903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:47.826 [2024-10-01 15:20:46.370916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:47.826 [2024-10-01 15:20:46.370928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.370940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:47.826 [2024-10-01 15:20:46.370952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:47.826 [2024-10-01 15:20:46.370964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.370976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:47.826 [2024-10-01 15:20:46.370989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.826 [2024-10-01 15:20:46.371013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:47.826 [2024-10-01 15:20:46.371025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.826 [2024-10-01 15:20:46.371049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:47.826 [2024-10-01 15:20:46.371061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.826 [2024-10-01 15:20:46.371084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:47.826 [2024-10-01 15:20:46.371105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.826 [2024-10-01 15:20:46.371130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:47.826 [2024-10-01 15:20:46.371142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.826 [2024-10-01 15:20:46.371165] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:47.826 [2024-10-01 15:20:46.371372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:47.826 [2024-10-01 15:20:46.371411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.826 [2024-10-01 15:20:46.371453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:47.826 [2024-10-01 15:20:46.371488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:47.826 [2024-10-01 15:20:46.371522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:47.826 [2024-10-01 15:20:46.371589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:47.826 [2024-10-01 15:20:46.371670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:47.826 [2024-10-01 15:20:46.371745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:47.826 [2024-10-01 15:20:46.371788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.826 [2024-10-01 15:20:46.371826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.826 [2024-10-01 15:20:46.371861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:47.826 [2024-10-01 15:20:46.371895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:48.085 [2024-10-01 15:20:46.371930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:48.085 [2024-10-01 15:20:46.372103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:48.085 [2024-10-01 15:20:46.372137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:48.085 [2024-10-01 15:20:46.372185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:48.085 [2024-10-01 15:20:46.372231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:48.085 [2024-10-01 15:20:46.372307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.085 [2024-10-01 15:20:46.372418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:48.085 [2024-10-01 15:20:46.372515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:48.085 [2024-10-01 15:20:46.372628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:48.085 [2024-10-01 15:20:46.372685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:48.085 [2024-10-01 15:20:46.372783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:48.085 [2024-10-01 15:20:46.372840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:48.085 [2024-10-01 15:20:46.372932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:48.085 [2024-10-01 15:20:46.373022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:48.085 [2024-10-01 15:20:46.373040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:48.086 [2024-10-01 15:20:46.373053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:48.086 [2024-10-01 15:20:46.373120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:48.086 [2024-10-01 15:20:46.373135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:48.086 [2024-10-01 15:20:46.373163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:48.086 [2024-10-01 15:20:46.373187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:48.086 [2024-10-01 15:20:46.373201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:48.086 [2024-10-01 15:20:46.373216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.373229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:48.086 [2024-10-01 15:20:46.373248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:20:48.086 [2024-10-01 15:20:46.373262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.393217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.393280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.086 [2024-10-01 15:20:46.393308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.903 ms 00:20:48.086 [2024-10-01 15:20:46.393319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.393438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.393453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.086 [2024-10-01 15:20:46.393467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.082 ms 00:20:48.086 [2024-10-01 15:20:46.393480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.405211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.405262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.086 [2024-10-01 15:20:46.405275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:20:48.086 [2024-10-01 15:20:46.405286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.405331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.405342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.086 [2024-10-01 15:20:46.405353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:48.086 [2024-10-01 15:20:46.405363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.405831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.405848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.086 [2024-10-01 15:20:46.405867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:48.086 [2024-10-01 15:20:46.405877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.405991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.406003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.086 [2024-10-01 15:20:46.406021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:48.086 [2024-10-01 15:20:46.406031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.411952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.411992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.086 [2024-10-01 15:20:46.412016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:20:48.086 [2024-10-01 15:20:46.412026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.414591] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:48.086 [2024-10-01 15:20:46.414626] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:48.086 [2024-10-01 15:20:46.414653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.414664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:48.086 [2024-10-01 15:20:46.414675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:20:48.086 [2024-10-01 15:20:46.414684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.428137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.428304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:48.086 [2024-10-01 15:20:46.428337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.434 ms 00:20:48.086 [2024-10-01 15:20:46.428348] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.430245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.430279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:48.086 [2024-10-01 15:20:46.430291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:20:48.086 [2024-10-01 15:20:46.430301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.431778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.431936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:48.086 [2024-10-01 15:20:46.431955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:20:48.086 [2024-10-01 15:20:46.431965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.432320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.432338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.086 [2024-10-01 15:20:46.432350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:48.086 [2024-10-01 15:20:46.432371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.452464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.452533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:48.086 [2024-10-01 15:20:46.452555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.097 ms 00:20:48.086 [2024-10-01 15:20:46.452566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.458880] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:48.086 [2024-10-01 15:20:46.462197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.462229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.086 [2024-10-01 15:20:46.462257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:20:48.086 [2024-10-01 15:20:46.462271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.462371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.462388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:48.086 [2024-10-01 15:20:46.462400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.086 [2024-10-01 15:20:46.462410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.462497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.462510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.086 [2024-10-01 15:20:46.462521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:48.086 [2024-10-01 15:20:46.462535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.462556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.462566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:20:48.086 [2024-10-01 15:20:46.462576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.086 [2024-10-01 15:20:46.462586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.462621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:48.086 [2024-10-01 15:20:46.462633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.462643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:48.086 [2024-10-01 15:20:46.462656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:48.086 [2024-10-01 15:20:46.462666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.466252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.466289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.086 [2024-10-01 15:20:46.466302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:20:48.086 [2024-10-01 15:20:46.466312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.466386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.086 [2024-10-01 15:20:46.466399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.086 [2024-10-01 15:20:46.466410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:48.086 [2024-10-01 15:20:46.466420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.086 [2024-10-01 15:20:46.467559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.271 ms, result 0 00:21:22.277  Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-01 15:21:20.550876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.277 [2024-10-01 15:21:20.550959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.277 [2024-10-01 15:21:20.550977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.277 [2024-10-01 15:21:20.550989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.277 
[2024-10-01 15:21:20.553831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.277 [2024-10-01 15:21:20.557151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.277 [2024-10-01 15:21:20.557202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.277 [2024-10-01 15:21:20.557217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:21:22.277 [2024-10-01 15:21:20.557228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.566860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.566903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.278 [2024-10-01 15:21:20.566929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.740 ms 00:21:22.278 [2024-10-01 15:21:20.566940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.589476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.589659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.278 [2024-10-01 15:21:20.589684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.553 ms 00:21:22.278 [2024-10-01 15:21:20.589696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.595057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.595099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.278 [2024-10-01 15:21:20.595112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:21:22.278 [2024-10-01 15:21:20.595123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.596836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.596976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.278 [2024-10-01 15:21:20.596996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:21:22.278 [2024-10-01 15:21:20.597007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.600380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.600472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.278 [2024-10-01 15:21:20.600510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:21:22.278 [2024-10-01 15:21:20.600592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.681012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.681301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.278 [2024-10-01 15:21:20.681404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.480 ms 00:21:22.278 [2024-10-01 15:21:20.681442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.683836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.683999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.278 [2024-10-01 
15:21:20.684086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.351 ms 00:21:22.278 [2024-10-01 15:21:20.684120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.685523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.685650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.278 [2024-10-01 15:21:20.685722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:21:22.278 [2024-10-01 15:21:20.685756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.686821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.686942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.278 [2024-10-01 15:21:20.687009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:21:22.278 [2024-10-01 15:21:20.687042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.688069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.278 [2024-10-01 15:21:20.688201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.278 [2024-10-01 15:21:20.688271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:21:22.278 [2024-10-01 15:21:20.688305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.278 [2024-10-01 15:21:20.688354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.278 [2024-10-01 15:21:20.688454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116224 / 261120 wr_cnt: 1 state: open 00:21:22.278 [2024-10-01 15:21:20.688535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.688914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 
15:21:20.689355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.689950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:21:22.278 [2024-10-01 15:21:20.690763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.690988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.691020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.691032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.691042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.691053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.278 [2024-10-01 15:21:20.691064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.279 [2024-10-01 15:21:20.691664] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.279 [2024-10-01 15:21:20.691675] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:21:22.279 [2024-10-01 15:21:20.691687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116224 00:21:22.279 [2024-10-01 15:21:20.691697] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117184 00:21:22.279 [2024-10-01 15:21:20.691706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116224 00:21:22.279 [2024-10-01 15:21:20.691734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:21:22.279 [2024-10-01 15:21:20.691744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.279 [2024-10-01 15:21:20.691754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.279 [2024-10-01 15:21:20.691764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.279 [2024-10-01 15:21:20.691773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.279 [2024-10-01 15:21:20.691782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.279 [2024-10-01 15:21:20.691792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-10-01 15:21:20.691803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.279 [2024-10-01 15:21:20.691813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:21:22.279 [2024-10-01 15:21:20.691823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.693616] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-10-01 15:21:20.693648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.279 [2024-10-01 15:21:20.693660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:21:22.279 [2024-10-01 15:21:20.693671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.693773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.279 [2024-10-01 15:21:20.693785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.279 [2024-10-01 15:21:20.693804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:22.279 [2024-10-01 15:21:20.693814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.700119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.700283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.279 [2024-10-01 15:21:20.700407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.700448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.700555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.700591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.279 [2024-10-01 15:21:20.700625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.700718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.700853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.700908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.279 [2024-10-01 15:21:20.700945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.701050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.701079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.701090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.279 [2024-10-01 15:21:20.701111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.701121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.715308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.715368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.279 [2024-10-01 15:21:20.715381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.715392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.279 [2024-10-01 15:21:20.723900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.723954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.279 [2024-10-01 15:21:20.723969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.279 [2024-10-01 15:21:20.723980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:22.279 [2024-10-01 15:21:20.724071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.279 [2024-10-01 15:21:20.724085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.280 [2024-10-01 15:21:20.724100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.280 [2024-10-01 15:21:20.724149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.280 [2024-10-01 15:21:20.724160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.280 [2024-10-01 15:21:20.724325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.280 [2024-10-01 15:21:20.724336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.280 [2024-10-01 15:21:20.724410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.280 [2024-10-01 15:21:20.724421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.280 [2024-10-01 15:21:20.724482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.280 [2024-10-01 15:21:20.724493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.280 [2024-10-01 15:21:20.724565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.280 [2024-10-01 15:21:20.724576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.280 [2024-10-01 15:21:20.724587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.280 [2024-10-01 15:21:20.724726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 175.806 ms, result 0 00:21:22.850 00:21:22.850 00:21:22.850 15:21:21 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:23.125 [2024-10-01 15:21:21.445546] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:21:23.125 [2024-10-01 15:21:21.445702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88731 ] 00:21:23.125 [2024-10-01 15:21:21.627954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.384 [2024-10-01 15:21:21.680696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.384 [2024-10-01 15:21:21.786500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.384 [2024-10-01 15:21:21.786577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.644 [2024-10-01 15:21:21.946831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.644 [2024-10-01 15:21:21.946898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.644 [2024-10-01 15:21:21.946925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.644 [2024-10-01 15:21:21.946942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.644 [2024-10-01 15:21:21.947013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.644 [2024-10-01 15:21:21.947027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.644 [2024-10-01 15:21:21.947037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:23.644 [2024-10-01 15:21:21.947056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.644 [2024-10-01 15:21:21.947076] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.644 [2024-10-01 15:21:21.947326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.644 [2024-10-01 15:21:21.947346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.644 [2024-10-01 15:21:21.947364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.644 [2024-10-01 15:21:21.947374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:23.644 [2024-10-01 15:21:21.947384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.644 [2024-10-01 15:21:21.948834] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.644 [2024-10-01 15:21:21.951549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.644 [2024-10-01 15:21:21.951713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.644 [2024-10-01 15:21:21.951804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:21:23.644 [2024-10-01 15:21:21.951843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.644 [2024-10-01 15:21:21.951929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.644 [2024-10-01 15:21:21.952043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.644 [2024-10-01 15:21:21.952142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:23.644 [2024-10-01 15:21:21.952187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.644 [2024-10-01 15:21:21.959119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:23.644 [2024-10-01 15:21:21.959277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.644 [2024-10-01 15:21:21.959359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:21:23.644 [2024-10-01 15:21:21.959396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.959525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.959624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.645 [2024-10-01 15:21:21.959680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:23.645 [2024-10-01 15:21:21.959712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.959839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.959880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.645 [2024-10-01 15:21:21.959963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.645 [2024-10-01 15:21:21.960000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.960055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.645 [2024-10-01 15:21:21.961918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.962057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.645 [2024-10-01 15:21:21.962195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:21:23.645 [2024-10-01 15:21:21.962233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.962276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.962289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.645 [2024-10-01 15:21:21.962311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.645 [2024-10-01 15:21:21.962321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.962354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.645 [2024-10-01 15:21:21.962381] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.645 [2024-10-01 15:21:21.962418] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.645 [2024-10-01 15:21:21.962436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:23.645 [2024-10-01 15:21:21.962529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.645 [2024-10-01 15:21:21.962543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.645 [2024-10-01 15:21:21.962557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:23.645 [2024-10-01 15:21:21.962578] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.645 [2024-10-01 15:21:21.962602] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.645 [2024-10-01 15:21:21.962614] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.645 [2024-10-01 15:21:21.962624] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.645 [2024-10-01 15:21:21.962634] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.645 [2024-10-01 15:21:21.962644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.645 [2024-10-01 15:21:21.962655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.962666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.645 [2024-10-01 15:21:21.962677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:21:23.645 [2024-10-01 15:21:21.962687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.962761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.645 [2024-10-01 15:21:21.962781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.645 [2024-10-01 15:21:21.962795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:23.645 [2024-10-01 15:21:21.962805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.645 [2024-10-01 15:21:21.962896] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.645 [2024-10-01 15:21:21.962909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.645 [2024-10-01 15:21:21.962920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.645 [2024-10-01 15:21:21.962944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.962956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.645 [2024-10-01 15:21:21.962965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.962975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.645 [2024-10-01 15:21:21.962984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.645 [2024-10-01 15:21:21.962994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.645 [2024-10-01 15:21:21.963014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.645 [2024-10-01 15:21:21.963024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.645 [2024-10-01 15:21:21.963034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.645 [2024-10-01 15:21:21.963044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.645 [2024-10-01 15:21:21.963054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.645 [2024-10-01 15:21:21.963064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.645 [2024-10-01 15:21:21.963083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963093] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.645 [2024-10-01 15:21:21.963115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.645 [2024-10-01 15:21:21.963144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.645 [2024-10-01 15:21:21.963186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.645 [2024-10-01 15:21:21.963215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.645 [2024-10-01 15:21:21.963244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.645 [2024-10-01 15:21:21.963263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.645 [2024-10-01 15:21:21.963276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.645 [2024-10-01 15:21:21.963286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.645 [2024-10-01 15:21:21.963295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.645 [2024-10-01 15:21:21.963305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.645 [2024-10-01 15:21:21.963314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.645 [2024-10-01 15:21:21.963334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.645 [2024-10-01 15:21:21.963343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.645 [2024-10-01 15:21:21.963363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.645 [2024-10-01 15:21:21.963383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.645 [2024-10-01 15:21:21.963409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.645 [2024-10-01 15:21:21.963419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.645 [2024-10-01 15:21:21.963429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.645 
[2024-10-01 15:21:21.963438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.645 [2024-10-01 15:21:21.963451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.645 [2024-10-01 15:21:21.963461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.645 [2024-10-01 15:21:21.963472] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.645 [2024-10-01 15:21:21.963484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.645 [2024-10-01 15:21:21.963496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.645 [2024-10-01 15:21:21.963507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.645 [2024-10-01 15:21:21.963518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.645 [2024-10-01 15:21:21.963528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.645 [2024-10-01 15:21:21.963539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.645 [2024-10-01 15:21:21.963550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.645 [2024-10-01 15:21:21.963561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.645 [2024-10-01 15:21:21.963571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.645 [2024-10-01 15:21:21.963582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.645 [2024-10-01 15:21:21.963593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.646 [2024-10-01 15:21:21.963678] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.646 [2024-10-01 15:21:21.963691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.646 [2024-10-01 15:21:21.963715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.646 [2024-10-01 15:21:21.963727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.646 [2024-10-01 15:21:21.963739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.646 [2024-10-01 15:21:21.963751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.963763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.646 [2024-10-01 15:21:21.963774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:21:23.646 [2024-10-01 15:21:21.963784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.986401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.986479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.646 [2024-10-01 15:21:21.986499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.585 ms 00:21:23.646 [2024-10-01 15:21:21.986514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.986659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.986688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.646 [2024-10-01 15:21:21.986703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:23.646 [2024-10-01 15:21:21.986717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.998745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.998794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.646 [2024-10-01 15:21:21.998809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.944 ms 00:21:23.646 [2024-10-01 15:21:21.998820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.998869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.998881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.646 [2024-10-01 15:21:21.998893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.646 [2024-10-01 15:21:21.998903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.999435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.999464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.646 [2024-10-01 15:21:21.999475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:21:23.646 [2024-10-01 15:21:21.999486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:21.999609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:21.999634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.646 [2024-10-01 15:21:21.999651] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:23.646 [2024-10-01 15:21:21.999662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.005764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.005917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.646 [2024-10-01 15:21:22.005947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.083 ms 00:21:23.646 [2024-10-01 15:21:22.005957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.008633] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:23.646 [2024-10-01 15:21:22.008671] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:23.646 [2024-10-01 15:21:22.008695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.008705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:23.646 [2024-10-01 15:21:22.008717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:21:23.646 [2024-10-01 15:21:22.008727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.022722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.022856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:23.646 [2024-10-01 15:21:22.023010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.978 ms 00:21:23.646 [2024-10-01 15:21:22.023050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.024825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.024956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:23.646 [2024-10-01 15:21:22.025042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:21:23.646 [2024-10-01 15:21:22.025080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.026418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.026542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:23.646 [2024-10-01 15:21:22.026561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:21:23.646 [2024-10-01 15:21:22.026572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.026896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.026912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:23.646 [2024-10-01 15:21:22.026923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:21:23.646 [2024-10-01 15:21:22.026933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.047744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.047993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:23.646 [2024-10-01 15:21:22.048020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.815 ms 00:21:23.646 [2024-10-01 15:21:22.048031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.055052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:23.646 [2024-10-01 15:21:22.058695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.058731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:23.646 [2024-10-01 15:21:22.058750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.611 ms 00:21:23.646 [2024-10-01 15:21:22.058761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.058873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.058888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:23.646 [2024-10-01 15:21:22.058899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:23.646 [2024-10-01 15:21:22.058910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.060688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.060727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:23.646 [2024-10-01 15:21:22.060751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:21:23.646 [2024-10-01 15:21:22.060766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.060797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.060809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:23.646 [2024-10-01 15:21:22.060820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.646 [2024-10-01 15:21:22.060838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.060880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:23.646 [2024-10-01 15:21:22.060893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.060904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:23.646 [2024-10-01 15:21:22.060915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:23.646 [2024-10-01 15:21:22.060925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.064812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.064859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:23.646 [2024-10-01 15:21:22.064882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:21:23.646 [2024-10-01 15:21:22.064900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 [2024-10-01 15:21:22.064973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.646 [2024-10-01 15:21:22.064993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:23.646 [2024-10-01 15:21:22.065004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:23.646 [2024-10-01 15:21:22.065015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.646 
[2024-10-01 15:21:22.066423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.277 ms, result 0 00:21:56.964  Copying: 31/1024 [MB] (31 MBps) Copying: 64/1024 [MB] (32 MBps) Copying: 99/1024 [MB] (34 MBps) Copying: 135/1024 [MB] (35 MBps) Copying: 170/1024 [MB] (35 MBps) Copying: 202/1024 [MB] (32 MBps) Copying: 235/1024 [MB] (32 MBps) Copying: 268/1024 [MB] (33 MBps) Copying: 301/1024 [MB] (33 MBps) Copying: 335/1024 [MB] (33 MBps) Copying: 367/1024 [MB] (31 MBps) Copying: 400/1024 [MB] (33 MBps) Copying: 431/1024 [MB] (30 MBps) Copying: 462/1024 [MB] (31 MBps) Copying: 492/1024 [MB] (30 MBps) Copying: 523/1024 [MB] (30 MBps) Copying: 553/1024 [MB] (30 MBps) Copying: 582/1024 [MB] (29 MBps) Copying: 612/1024 [MB] (29 MBps) Copying: 642/1024 [MB] (30 MBps) Copying: 671/1024 [MB] (29 MBps) Copying: 700/1024 [MB] (29 MBps) Copying: 730/1024 [MB] (30 MBps) Copying: 759/1024 [MB] (29 MBps) Copying: 789/1024 [MB] (29 MBps) Copying: 818/1024 [MB] (29 MBps) Copying: 849/1024 [MB] (30 MBps) Copying: 877/1024 [MB] (27 MBps) Copying: 906/1024 [MB] (29 MBps) Copying: 936/1024 [MB] (29 MBps) Copying: 966/1024 [MB] (30 MBps) Copying: 996/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-10-01 15:21:55.352239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.964 [2024-10-01 15:21:55.352308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.964 [2024-10-01 15:21:55.352330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:56.964 [2024-10-01 15:21:55.352344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.964 [2024-10-01 15:21:55.352385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:56.964 [2024-10-01 15:21:55.353172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.964 [2024-10-01 15:21:55.353194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.964 [2024-10-01 15:21:55.353206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:21:56.964 [2024-10-01 15:21:55.353216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.964 [2024-10-01 15:21:55.353413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.964 [2024-10-01 15:21:55.353427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.964 [2024-10-01 15:21:55.353437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:21:56.965 [2024-10-01 15:21:55.353446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.357434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.357611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.965 [2024-10-01 15:21:55.357635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.975 ms 00:21:56.965 [2024-10-01 15:21:55.357647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.363049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.363096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:56.965 [2024-10-01 15:21:55.363110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.364 ms 
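For context, the "Copying: N/1024 [MB]" progress above is the restore test streaming a 1 GiB test file through the freshly started ftl0 bdev before tearing the device down; the data is later checksummed across the shutdown/startup cycle. A minimal sketch of that write-then-verify pattern, assuming the dd invocation and random data source (only the testfile paths, the 1024 MiB size, and the final "md5sum -c" step appear in this log):
# Sketch only -- the exact data-generation command is an assumption; paths
# and the verification step are taken from this log.
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
dd if=/dev/urandom of="$testfile" bs=4096 count=262144   # 1 GiB of random data
md5sum "$testfile" > "$testfile.md5"                     # record reference checksum
# Write the file through the FTL bdev (this produces the "Copying: .../1024 [MB]"
# progress above), restart the device, read the data back out, then verify:
md5sum -c "$testfile.md5"                                # expects "testfile: OK"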
00:21:56.965 [2024-10-01 15:21:55.363120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.364611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.364758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.965 [2024-10-01 15:21:55.364779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:21:56.965 [2024-10-01 15:21:55.364790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.368743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.368784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.965 [2024-10-01 15:21:55.368796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:21:56.965 [2024-10-01 15:21:55.368808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.498152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.498265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.965 [2024-10-01 15:21:55.498283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.514 ms 00:21:56.965 [2024-10-01 15:21:55.498295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.500628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.500689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:56.965 [2024-10-01 15:21:55.500715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:21:56.965 [2024-10-01 15:21:55.500733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.502120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.502192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:56.965 [2024-10-01 15:21:55.502207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:21:56.965 [2024-10-01 15:21:55.502218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.503297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.503451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.965 [2024-10-01 15:21:55.503472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:21:56.965 [2024-10-01 15:21:55.503483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.504508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.965 [2024-10-01 15:21:55.504538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:56.965 [2024-10-01 15:21:55.504551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:21:56.965 [2024-10-01 15:21:55.504561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.965 [2024-10-01 15:21:55.504589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:56.965 [2024-10-01 15:21:55.504606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 
00:21:56.965 [2024-10-01 15:21:55.504632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:21:56.965 [2024-10-01 15:21:55.504917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.504992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:56.965 [2024-10-01 15:21:55.505279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:56.966 [2024-10-01 15:21:55.505733] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:56.966 [2024-10-01 15:21:55.505743] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a75a8b7f-e542-4dc7-afc8-62fa7d0b6a59 00:21:56.966 [2024-10-01 15:21:55.505754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:56.966 [2024-10-01 15:21:55.505764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15808 00:21:56.966 [2024-10-01 15:21:55.505773] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14848 00:21:56.966 [2024-10-01 15:21:55.505792] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0647 00:21:56.966 [2024-10-01 15:21:55.505801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:56.966 [2024-10-01 15:21:55.505811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:56.966 [2024-10-01 15:21:55.505821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:56.966 [2024-10-01 15:21:55.505830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:56.966 [2024-10-01 15:21:55.505839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:56.966 [2024-10-01 15:21:55.505849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.966 [2024-10-01 15:21:55.505859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:56.966 [2024-10-01 15:21:55.505870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:21:56.966 [2024-10-01 15:21:55.505880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.966 [2024-10-01 15:21:55.507680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.966 [2024-10-01 15:21:55.507716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:56.966 [2024-10-01 15:21:55.507733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:21:56.966 [2024-10-01 15:21:55.507743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.966 [2024-10-01 15:21:55.507849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.966 [2024-10-01 15:21:55.507861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:56.966 [2024-10-01 15:21:55.507873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:56.966 [2024-10-01 15:21:55.507891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.514056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.514215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.225 [2024-10-01 15:21:55.514237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.514247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.514305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.514316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.225 [2024-10-01 15:21:55.514327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.514337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.514418] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.514436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.225 [2024-10-01 15:21:55.514448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.514459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.514475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.514486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.225 [2024-10-01 15:21:55.514496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.514506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.528506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.528723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.225 [2024-10-01 15:21:55.528744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.528784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.225 [2024-10-01 15:21:55.537215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.225 [2024-10-01 15:21:55.537319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.225 [2024-10-01 15:21:55.537377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.225 [2024-10-01 15:21:55.537485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.225 [2024-10-01 15:21:55.537565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.225 [2024-10-01 15:21:55.537634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.225 [2024-10-01 15:21:55.537689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.225 [2024-10-01 15:21:55.537701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.225 [2024-10-01 15:21:55.537711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.225 [2024-10-01 15:21:55.537721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.226 [2024-10-01 15:21:55.537837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 185.874 ms, result 0 00:21:57.485 00:21:57.485 00:21:57.485 15:21:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:59.390 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87364 00:21:59.390 15:21:57 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87364 ']' 00:21:59.390 15:21:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87364 00:21:59.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87364) - No such process 00:21:59.390 Process with pid 87364 is not found 00:21:59.390 15:21:57 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 87364 is not found' 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:59.390 Remove shared memory files 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:59.390 15:21:57 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:59.390 ************************************ 00:21:59.390 END TEST ftl_restore 00:21:59.390 ************************************ 00:21:59.390 00:21:59.390 real 2m46.054s 00:21:59.390 user 2m33.901s 00:21:59.390 sys 0m13.863s 00:21:59.390 15:21:57 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.390 15:21:57 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:59.390 15:21:57 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 
0000:00:11.0 00:21:59.390 15:21:57 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:59.390 15:21:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.390 15:21:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:59.390 ************************************ 00:21:59.390 START TEST ftl_dirty_shutdown 00:21:59.390 ************************************ 00:21:59.390 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:59.390 * Looking for test storage... 00:21:59.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.390 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:59.390 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:59.390 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.649 --rc genhtml_branch_coverage=1 00:21:59.649 --rc genhtml_function_coverage=1 00:21:59.649 --rc genhtml_legend=1 00:21:59.649 --rc geninfo_all_blocks=1 00:21:59.649 --rc geninfo_unexecuted_blocks=1 00:21:59.649 00:21:59.649 ' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.649 --rc genhtml_branch_coverage=1 00:21:59.649 --rc genhtml_function_coverage=1 00:21:59.649 --rc genhtml_legend=1 00:21:59.649 --rc geninfo_all_blocks=1 00:21:59.649 --rc geninfo_unexecuted_blocks=1 00:21:59.649 00:21:59.649 ' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.649 --rc genhtml_branch_coverage=1 00:21:59.649 --rc genhtml_function_coverage=1 00:21:59.649 --rc genhtml_legend=1 00:21:59.649 --rc geninfo_all_blocks=1 00:21:59.649 --rc geninfo_unexecuted_blocks=1 00:21:59.649 00:21:59.649 ' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:59.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.649 --rc genhtml_branch_coverage=1 00:21:59.649 --rc genhtml_function_coverage=1 00:21:59.649 --rc genhtml_legend=1 00:21:59.649 --rc geninfo_all_blocks=1 00:21:59.649 --rc geninfo_unexecuted_blocks=1 00:21:59.649 00:21:59.649 ' 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:59.649 15:21:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.649 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.649 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:59.650 15:21:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89167 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89167 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:59.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89167 ']' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.650 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.650 [2024-10-01 15:21:58.137214] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
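The "waitforlisten 89167" / "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." exchange above is the usual target-startup handshake: the script backgrounds spdk_tgt pinned to core 0, then blocks until the RPC socket accepts connections before any rpc.py calls are issued. A condensed sketch of that sequence, assuming the helper's polling semantics (the binary path, the -m 0x1 mask, and pid 89167 are from the log):
# Sketch of the startup handshake shown above; waitforlisten is the
# autotest_common.sh helper that polls /var/tmp/spdk.sock until it answers.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # -m 0x1: reactors on core 0 only
svcpid=$!                                                  # 89167 in this run
waitforlisten "$svcpid"                                    # block until the RPC server is up
# From here on, rpc.py calls (bdev_nvme_attach_controller, ...) are safe.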
00:21:59.650 [2024-10-01 15:21:58.137541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89167 ] 00:21:59.909 [2024-10-01 15:21:58.308316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.909 [2024-10-01 15:21:58.373991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:00.477 15:21:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:01.044 { 00:22:01.044 "name": "nvme0n1", 00:22:01.044 "aliases": [ 00:22:01.044 "52e0f4f3-84f9-44c1-a638-c1128db4c6ca" 00:22:01.044 ], 00:22:01.044 "product_name": "NVMe disk", 00:22:01.044 "block_size": 4096, 00:22:01.044 "num_blocks": 1310720, 00:22:01.044 "uuid": "52e0f4f3-84f9-44c1-a638-c1128db4c6ca", 00:22:01.044 "numa_id": -1, 00:22:01.044 "assigned_rate_limits": { 00:22:01.044 "rw_ios_per_sec": 0, 00:22:01.044 "rw_mbytes_per_sec": 0, 00:22:01.044 "r_mbytes_per_sec": 0, 00:22:01.044 "w_mbytes_per_sec": 0 00:22:01.044 }, 00:22:01.044 "claimed": true, 00:22:01.044 "claim_type": "read_many_write_one", 00:22:01.044 "zoned": false, 00:22:01.044 "supported_io_types": { 00:22:01.044 "read": true, 00:22:01.044 "write": true, 00:22:01.044 "unmap": true, 00:22:01.044 "flush": true, 00:22:01.044 "reset": true, 00:22:01.044 "nvme_admin": true, 00:22:01.044 "nvme_io": true, 00:22:01.044 "nvme_io_md": false, 00:22:01.044 "write_zeroes": true, 00:22:01.044 "zcopy": false, 00:22:01.044 "get_zone_info": false, 00:22:01.044 "zone_management": false, 00:22:01.044 "zone_append": false, 00:22:01.044 "compare": true, 00:22:01.044 "compare_and_write": false, 00:22:01.044 "abort": true, 00:22:01.044 "seek_hole": false, 00:22:01.044 "seek_data": false, 00:22:01.044 
"copy": true, 00:22:01.044 "nvme_iov_md": false 00:22:01.044 }, 00:22:01.044 "driver_specific": { 00:22:01.044 "nvme": [ 00:22:01.044 { 00:22:01.044 "pci_address": "0000:00:11.0", 00:22:01.044 "trid": { 00:22:01.044 "trtype": "PCIe", 00:22:01.044 "traddr": "0000:00:11.0" 00:22:01.044 }, 00:22:01.044 "ctrlr_data": { 00:22:01.044 "cntlid": 0, 00:22:01.044 "vendor_id": "0x1b36", 00:22:01.044 "model_number": "QEMU NVMe Ctrl", 00:22:01.044 "serial_number": "12341", 00:22:01.044 "firmware_revision": "8.0.0", 00:22:01.044 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:01.044 "oacs": { 00:22:01.044 "security": 0, 00:22:01.044 "format": 1, 00:22:01.044 "firmware": 0, 00:22:01.044 "ns_manage": 1 00:22:01.044 }, 00:22:01.044 "multi_ctrlr": false, 00:22:01.044 "ana_reporting": false 00:22:01.044 }, 00:22:01.044 "vs": { 00:22:01.044 "nvme_version": "1.4" 00:22:01.044 }, 00:22:01.044 "ns_data": { 00:22:01.044 "id": 1, 00:22:01.044 "can_share": false 00:22:01.044 } 00:22:01.044 } 00:22:01.044 ], 00:22:01.044 "mp_policy": "active_passive" 00:22:01.044 } 00:22:01.044 } 00:22:01.044 ]' 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:01.044 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:01.345 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7c03b76c-354b-443e-a3f3-df6b58c8ae8b 00:22:01.346 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:01.346 15:21:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c03b76c-354b-443e-a3f3-df6b58c8ae8b 00:22:01.604 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:01.863 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9faad464-9cba-4c1e-9ba8-50cb5f91c523 00:22:01.863 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9faad464-9cba-4c1e-9ba8-50cb5f91c523 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:02.121 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:02.379 { 00:22:02.379 "name": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:02.379 "aliases": [ 00:22:02.379 "lvs/nvme0n1p0" 00:22:02.379 ], 00:22:02.379 "product_name": "Logical Volume", 00:22:02.379 "block_size": 4096, 00:22:02.379 "num_blocks": 26476544, 00:22:02.379 "uuid": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:02.379 "assigned_rate_limits": { 00:22:02.379 "rw_ios_per_sec": 0, 00:22:02.379 "rw_mbytes_per_sec": 0, 00:22:02.379 "r_mbytes_per_sec": 0, 00:22:02.379 "w_mbytes_per_sec": 0 00:22:02.379 }, 00:22:02.379 "claimed": false, 00:22:02.379 "zoned": false, 00:22:02.379 "supported_io_types": { 00:22:02.379 "read": true, 00:22:02.379 "write": true, 00:22:02.379 "unmap": true, 00:22:02.379 "flush": false, 00:22:02.379 "reset": true, 00:22:02.379 "nvme_admin": false, 00:22:02.379 "nvme_io": false, 00:22:02.379 "nvme_io_md": false, 00:22:02.379 "write_zeroes": true, 00:22:02.379 "zcopy": false, 00:22:02.379 "get_zone_info": false, 00:22:02.379 "zone_management": false, 00:22:02.379 "zone_append": false, 00:22:02.379 "compare": false, 00:22:02.379 "compare_and_write": false, 00:22:02.379 "abort": false, 00:22:02.379 "seek_hole": true, 00:22:02.379 "seek_data": true, 00:22:02.379 "copy": false, 00:22:02.379 "nvme_iov_md": false 00:22:02.379 }, 00:22:02.379 "driver_specific": { 00:22:02.379 "lvol": { 00:22:02.379 "lvol_store_uuid": "9faad464-9cba-4c1e-9ba8-50cb5f91c523", 00:22:02.379 "base_bdev": "nvme0n1", 00:22:02.379 "thin_provision": true, 00:22:02.379 "num_allocated_clusters": 0, 00:22:02.379 "snapshot": false, 00:22:02.379 "clone": false, 00:22:02.379 "esnap_clone": false 00:22:02.379 } 00:22:02.379 } 00:22:02.379 } 00:22:02.379 ]' 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:02.379 15:22:00 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:02.637 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:02.897 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:02.897 { 00:22:02.897 "name": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:02.897 "aliases": [ 00:22:02.897 "lvs/nvme0n1p0" 00:22:02.897 ], 00:22:02.897 "product_name": "Logical Volume", 00:22:02.897 "block_size": 4096, 00:22:02.897 "num_blocks": 26476544, 00:22:02.897 "uuid": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:02.897 "assigned_rate_limits": { 00:22:02.897 "rw_ios_per_sec": 0, 00:22:02.897 "rw_mbytes_per_sec": 0, 00:22:02.897 "r_mbytes_per_sec": 0, 00:22:02.897 "w_mbytes_per_sec": 0 00:22:02.897 }, 00:22:02.897 "claimed": false, 00:22:02.897 "zoned": false, 00:22:02.897 "supported_io_types": { 00:22:02.897 "read": true, 00:22:02.897 "write": true, 00:22:02.897 "unmap": true, 00:22:02.897 "flush": false, 00:22:02.897 "reset": true, 00:22:02.897 "nvme_admin": false, 00:22:02.897 "nvme_io": false, 00:22:02.897 "nvme_io_md": false, 00:22:02.897 "write_zeroes": true, 00:22:02.897 "zcopy": false, 00:22:02.897 "get_zone_info": false, 00:22:02.898 "zone_management": false, 00:22:02.898 "zone_append": false, 00:22:02.898 "compare": false, 00:22:02.898 "compare_and_write": false, 00:22:02.898 "abort": false, 00:22:02.898 "seek_hole": true, 00:22:02.898 "seek_data": true, 00:22:02.898 "copy": false, 00:22:02.898 "nvme_iov_md": false 00:22:02.898 }, 00:22:02.898 "driver_specific": { 00:22:02.898 "lvol": { 00:22:02.898 "lvol_store_uuid": "9faad464-9cba-4c1e-9ba8-50cb5f91c523", 00:22:02.898 "base_bdev": "nvme0n1", 00:22:02.898 "thin_provision": true, 00:22:02.898 "num_allocated_clusters": 0, 00:22:02.898 "snapshot": false, 00:22:02.898 "clone": false, 00:22:02.898 "esnap_clone": false 00:22:02.898 } 00:22:02.898 } 00:22:02.898 } 00:22:02.898 ]' 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:02.898 15:22:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:03.157 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2fe15730-6427-43e9-9c65-c2c90f0b46f9 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:03.415 { 00:22:03.415 "name": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:03.415 "aliases": [ 00:22:03.415 "lvs/nvme0n1p0" 00:22:03.415 ], 00:22:03.415 "product_name": "Logical Volume", 00:22:03.415 "block_size": 4096, 00:22:03.415 "num_blocks": 26476544, 00:22:03.415 "uuid": "2fe15730-6427-43e9-9c65-c2c90f0b46f9", 00:22:03.415 "assigned_rate_limits": { 00:22:03.415 "rw_ios_per_sec": 0, 00:22:03.415 "rw_mbytes_per_sec": 0, 00:22:03.415 "r_mbytes_per_sec": 0, 00:22:03.415 "w_mbytes_per_sec": 0 00:22:03.415 }, 00:22:03.415 "claimed": false, 00:22:03.415 "zoned": false, 00:22:03.415 "supported_io_types": { 00:22:03.415 "read": true, 00:22:03.415 "write": true, 00:22:03.415 "unmap": true, 00:22:03.415 "flush": false, 00:22:03.415 "reset": true, 00:22:03.415 "nvme_admin": false, 00:22:03.415 "nvme_io": false, 00:22:03.415 "nvme_io_md": false, 00:22:03.415 "write_zeroes": true, 00:22:03.415 "zcopy": false, 00:22:03.415 "get_zone_info": false, 00:22:03.415 "zone_management": false, 00:22:03.415 "zone_append": false, 00:22:03.415 "compare": false, 00:22:03.415 "compare_and_write": false, 00:22:03.415 "abort": false, 00:22:03.415 "seek_hole": true, 00:22:03.415 "seek_data": true, 00:22:03.415 "copy": false, 00:22:03.415 "nvme_iov_md": false 00:22:03.415 }, 00:22:03.415 "driver_specific": { 00:22:03.415 "lvol": { 00:22:03.415 "lvol_store_uuid": "9faad464-9cba-4c1e-9ba8-50cb5f91c523", 00:22:03.415 "base_bdev": "nvme0n1", 00:22:03.415 "thin_provision": true, 00:22:03.415 "num_allocated_clusters": 0, 00:22:03.415 "snapshot": false, 00:22:03.415 "clone": false, 00:22:03.415 "esnap_clone": false 00:22:03.415 } 00:22:03.415 } 00:22:03.415 } 00:22:03.415 ]' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2fe15730-6427-43e9-9c65-c2c90f0b46f9 
--l2p_dram_limit 10' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:03.415 15:22:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2fe15730-6427-43e9-9c65-c2c90f0b46f9 --l2p_dram_limit 10 -c nvc0n1p0 00:22:03.675 [2024-10-01 15:22:02.090395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.675 [2024-10-01 15:22:02.090654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.675 [2024-10-01 15:22:02.090682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.675 [2024-10-01 15:22:02.090696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.675 [2024-10-01 15:22:02.090790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.675 [2024-10-01 15:22:02.090806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.675 [2024-10-01 15:22:02.090817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:03.675 [2024-10-01 15:22:02.090834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.675 [2024-10-01 15:22:02.090866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.676 [2024-10-01 15:22:02.091217] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.676 [2024-10-01 15:22:02.091239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.091252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.676 [2024-10-01 15:22:02.091273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:22:03.676 [2024-10-01 15:22:02.091286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.091365] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49acedf4-6ec5-40c7-ad30-b97ab3bf08c2 00:22:03.676 [2024-10-01 15:22:02.092810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.092836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:03.676 [2024-10-01 15:22:02.092851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:03.676 [2024-10-01 15:22:02.092861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.100358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.100503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.676 [2024-10-01 15:22:02.100530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.459 ms 00:22:03.676 [2024-10-01 15:22:02.100541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.100638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.100651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.676 [2024-10-01 15:22:02.100665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:03.676 [2024-10-01 15:22:02.100677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.100758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.100770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.676 [2024-10-01 15:22:02.100784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:03.676 [2024-10-01 15:22:02.100793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.100822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.676 [2024-10-01 15:22:02.102658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.102691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.676 [2024-10-01 15:22:02.102707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:22:03.676 [2024-10-01 15:22:02.102720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.102753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.102776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.676 [2024-10-01 15:22:02.102786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:03.676 [2024-10-01 15:22:02.102802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.102820] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:03.676 [2024-10-01 15:22:02.102957] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.676 [2024-10-01 15:22:02.102972] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.676 [2024-10-01 15:22:02.102988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.676 [2024-10-01 15:22:02.103000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103015] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103026] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:03.676 [2024-10-01 15:22:02.103045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.676 [2024-10-01 15:22:02.103055] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.676 [2024-10-01 15:22:02.103068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.676 [2024-10-01 15:22:02.103081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.103093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.676 [2024-10-01 15:22:02.103103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:22:03.676 [2024-10-01 15:22:02.103116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.103207] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.676 [2024-10-01 15:22:02.103225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.676 [2024-10-01 15:22:02.103236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:03.676 [2024-10-01 15:22:02.103248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.676 [2024-10-01 15:22:02.103333] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.676 [2024-10-01 15:22:02.103350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.676 [2024-10-01 15:22:02.103360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.676 [2024-10-01 15:22:02.103396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.676 [2024-10-01 15:22:02.103426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.676 [2024-10-01 15:22:02.103447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.676 [2024-10-01 15:22:02.103459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:03.676 [2024-10-01 15:22:02.103468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.676 [2024-10-01 15:22:02.103484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.676 [2024-10-01 15:22:02.103494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:03.676 [2024-10-01 15:22:02.103506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.676 [2024-10-01 15:22:02.103528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.676 [2024-10-01 15:22:02.103557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.676 [2024-10-01 15:22:02.103590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.676 [2024-10-01 15:22:02.103630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103651] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.676 [2024-10-01 15:22:02.103665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.676 [2024-10-01 15:22:02.103695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.676 [2024-10-01 15:22:02.103715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.676 [2024-10-01 15:22:02.103727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:03.676 [2024-10-01 15:22:02.103736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.676 [2024-10-01 15:22:02.103748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.676 [2024-10-01 15:22:02.103757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:03.676 [2024-10-01 15:22:02.103770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.676 [2024-10-01 15:22:02.103791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:03.676 [2024-10-01 15:22:02.103800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103811] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.676 [2024-10-01 15:22:02.103822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.676 [2024-10-01 15:22:02.103837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.676 [2024-10-01 15:22:02.103854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.676 [2024-10-01 15:22:02.103868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.676 [2024-10-01 15:22:02.103878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.676 [2024-10-01 15:22:02.103889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.676 [2024-10-01 15:22:02.103899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.677 [2024-10-01 15:22:02.103910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.677 [2024-10-01 15:22:02.103920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.677 [2024-10-01 15:22:02.103936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.677 [2024-10-01 15:22:02.103948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.103962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:03.677 [2024-10-01 15:22:02.103972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:03.677 [2024-10-01 15:22:02.103985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:03.677 [2024-10-01 15:22:02.103996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:03.677 [2024-10-01 15:22:02.104013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:03.677 [2024-10-01 15:22:02.104024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:03.677 [2024-10-01 15:22:02.104041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:03.677 [2024-10-01 15:22:02.104052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:03.677 [2024-10-01 15:22:02.104065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:03.677 [2024-10-01 15:22:02.104075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:03.677 [2024-10-01 15:22:02.104133] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.677 [2024-10-01 15:22:02.104154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.677 [2024-10-01 15:22:02.104189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.677 [2024-10-01 15:22:02.104202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.677 [2024-10-01 15:22:02.104213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.677 [2024-10-01 15:22:02.104226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.677 [2024-10-01 15:22:02.104236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.677 [2024-10-01 15:22:02.104252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:22:03.677 [2024-10-01 15:22:02.104262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.677 [2024-10-01 15:22:02.104309] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:03.677 [2024-10-01 15:22:02.104321] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:06.209 [2024-10-01 15:22:04.552117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.552207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:06.209 [2024-10-01 15:22:04.552232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2451.774 ms 00:22:06.209 [2024-10-01 15:22:04.552243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.563548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.563603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.209 [2024-10-01 15:22:04.563630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.198 ms 00:22:06.209 [2024-10-01 15:22:04.563642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.563740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.563751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.209 [2024-10-01 15:22:04.563770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:06.209 [2024-10-01 15:22:04.563780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.574541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.574589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.209 [2024-10-01 15:22:04.574608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.707 ms 00:22:06.209 [2024-10-01 15:22:04.574618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.574661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.574672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.209 [2024-10-01 15:22:04.574688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:06.209 [2024-10-01 15:22:04.574699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.575190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.575217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.209 [2024-10-01 15:22:04.575232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:22:06.209 [2024-10-01 15:22:04.575242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.575351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.575361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.209 [2024-10-01 15:22:04.575375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:06.209 [2024-10-01 15:22:04.575387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.593437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.593671] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.209 [2024-10-01 15:22:04.593710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.053 ms 00:22:06.209 [2024-10-01 15:22:04.593724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.602700] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:06.209 [2024-10-01 15:22:04.606020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.606060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.209 [2024-10-01 15:22:04.606077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.196 ms 00:22:06.209 [2024-10-01 15:22:04.606092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.664992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.665086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:06.209 [2024-10-01 15:22:04.665103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.949 ms 00:22:06.209 [2024-10-01 15:22:04.665120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.665354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.665374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.209 [2024-10-01 15:22:04.665386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:22:06.209 [2024-10-01 15:22:04.665400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.668779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.668828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:06.209 [2024-10-01 15:22:04.668850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.358 ms 00:22:06.209 [2024-10-01 15:22:04.668865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.671391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.671549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:06.209 [2024-10-01 15:22:04.671572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:22:06.209 [2024-10-01 15:22:04.671586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.671906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.671933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.209 [2024-10-01 15:22:04.671946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:22:06.209 [2024-10-01 15:22:04.671962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.699099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.699367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:06.209 [2024-10-01 15:22:04.699392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.153 ms 00:22:06.209 [2024-10-01 15:22:04.699406] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.704146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.704209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:06.209 [2024-10-01 15:22:04.704234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:22:06.209 [2024-10-01 15:22:04.704249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.707626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.209 [2024-10-01 15:22:04.707686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:06.209 [2024-10-01 15:22:04.707699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:22:06.209 [2024-10-01 15:22:04.707713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.209 [2024-10-01 15:22:04.711200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.210 [2024-10-01 15:22:04.711240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.210 [2024-10-01 15:22:04.711254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:22:06.210 [2024-10-01 15:22:04.711271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.210 [2024-10-01 15:22:04.711315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.210 [2024-10-01 15:22:04.711330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.210 [2024-10-01 15:22:04.711342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:06.210 [2024-10-01 15:22:04.711355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.210 [2024-10-01 15:22:04.711430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.210 [2024-10-01 15:22:04.711453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.210 [2024-10-01 15:22:04.711465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:06.210 [2024-10-01 15:22:04.711489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.210 [2024-10-01 15:22:04.712615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2626.006 ms, result 0 00:22:06.210 { 00:22:06.210 "name": "ftl0", 00:22:06.210 "uuid": "49acedf4-6ec5-40c7-ad30-b97ab3bf08c2" 00:22:06.210 } 00:22:06.210 15:22:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:06.210 15:22:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:06.469 15:22:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:06.469 15:22:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:06.469 15:22:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:06.727 /dev/nbd0 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:06.727 1+0 records in 00:22:06.727 1+0 records out 00:22:06.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038227 s, 10.7 MB/s 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:06.727 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:06.728 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:06.728 15:22:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:06.728 15:22:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:06.987 [2024-10-01 15:22:05.298131] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:22:06.987 [2024-10-01 15:22:05.298285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89299 ] 00:22:06.987 [2024-10-01 15:22:05.467973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.987 [2024-10-01 15:22:05.510619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.634  Copying: 199/1024 [MB] (199 MBps) Copying: 391/1024 [MB] (191 MBps) Copying: 582/1024 [MB] (191 MBps) Copying: 782/1024 [MB] (199 MBps) Copying: 979/1024 [MB] (197 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:22:12.634 00:22:12.634 15:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:14.535 15:22:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:14.535 [2024-10-01 15:22:12.866683] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
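The trace above is the write phase of the dirty-shutdown check: spdk_dd fills a 1 GiB test file (262144 blocks of 4096 bytes) from /dev/urandom, md5sum records its checksum, and a second spdk_dd pushes the file through /dev/nbd0 into the FTL device with direct I/O. A minimal sketch of that write-and-checksum phase follows; the variable names (testfile, md5_before) are illustrative rather than part of the script, and the read-back comparison against the recorded checksum is assumed to happen after the FTL bdev is reloaded later in the test:

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # 1 GiB of random payload (262144 x 4096-byte blocks)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
  # checksum recorded before the dirty shutdown, to verify against after recovery
  md5_before=$(md5sum "$testfile" | cut -d' ' -f1)
  # write the same data through the nbd export into ftl0, bypassing the page cache
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct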
00:22:14.535 [2024-10-01 15:22:12.866840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89381 ] 00:22:14.535 [2024-10-01 15:22:13.037047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.793 [2024-10-01 15:22:13.085112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.626  Copying: 19/1024 [MB] (19 MBps) [... 53 intermediate progress updates (38/1024 through 996/1024 [MB]) omitted; throughput held at 17-19 MBps throughout ...] Copying: 1014/1024 [MB] (18 MBps) Copying: 1024/1024 [MB] (average 18 MBps) 00:23:10.626 00:23:10.626 15:23:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 15:23:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 15:23:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:10.888 [2024-10-01 15:23:09.318700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.318943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:10.888 [2024-10-01 15:23:09.318976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:10.888 [2024-10-01 15:23:09.318988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.319039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:10.888 [2024-10-01 15:23:09.319761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.319801] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:10.888 [2024-10-01 15:23:09.319813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:23:10.888 [2024-10-01 15:23:09.319830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.321497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.321554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:10.888 [2024-10-01 15:23:09.321569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:23:10.888 [2024-10-01 15:23:09.321583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.339760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.339823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:10.888 [2024-10-01 15:23:09.339839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.184 ms 00:23:10.888 [2024-10-01 15:23:09.339853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.345048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.345092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:10.888 [2024-10-01 15:23:09.345114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.155 ms 00:23:10.888 [2024-10-01 15:23:09.345127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.347053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.347104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:10.888 [2024-10-01 15:23:09.347117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:23:10.888 [2024-10-01 15:23:09.347129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.351952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.352001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:10.888 [2024-10-01 15:23:09.352016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:23:10.888 [2024-10-01 15:23:09.352043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.352203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.352221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:10.888 [2024-10-01 15:23:09.352233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:10.888 [2024-10-01 15:23:09.352247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.354388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.354429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:10.888 [2024-10-01 15:23:09.354442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:23:10.888 [2024-10-01 15:23:09.354458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.355941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:10.888 [2024-10-01 15:23:09.356112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:10.888 [2024-10-01 15:23:09.356133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:23:10.888 [2024-10-01 15:23:09.356147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.357205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.357246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:10.888 [2024-10-01 15:23:09.357258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:23:10.888 [2024-10-01 15:23:09.357270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.358380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.888 [2024-10-01 15:23:09.358421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:10.888 [2024-10-01 15:23:09.358432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:23:10.888 [2024-10-01 15:23:09.358445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.888 [2024-10-01 15:23:09.358475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:10.888 [2024-10-01 15:23:09.358495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358688] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:10.888 [2024-10-01 15:23:09.358874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.358985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 
15:23:09.358999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.359993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:23:10.889 [2024-10-01 15:23:09.360287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:10.889 [2024-10-01 15:23:09.360770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:10.889 [2024-10-01 15:23:09.360781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49acedf4-6ec5-40c7-ad30-b97ab3bf08c2 00:23:10.889 [2024-10-01 15:23:09.360795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:10.889 [2024-10-01 15:23:09.360808] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:10.889 [2024-10-01 15:23:09.360821] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:10.889 [2024-10-01 15:23:09.360831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:10.889 [2024-10-01 15:23:09.360844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:10.889 [2024-10-01 15:23:09.360854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:10.889 [2024-10-01 15:23:09.360868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:10.889 [2024-10-01 15:23:09.360877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:10.889 [2024-10-01 15:23:09.360889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:10.889 [2024-10-01 15:23:09.360900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.889 [2024-10-01 15:23:09.360915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:10.889 [2024-10-01 15:23:09.360927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:23:10.889 [2024-10-01 15:23:09.360940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.889 [2024-10-01 15:23:09.362837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.889 [2024-10-01 15:23:09.362878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:10.889 [2024-10-01 15:23:09.362890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:23:10.889 [2024-10-01 15:23:09.362902] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.889 [2024-10-01 15:23:09.363024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.889 [2024-10-01 15:23:09.363046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:10.889 [2024-10-01 15:23:09.363058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:10.889 [2024-10-01 15:23:09.363071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.889 [2024-10-01 15:23:09.370477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.889 [2024-10-01 15:23:09.370624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:10.890 [2024-10-01 15:23:09.370716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.370763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.370849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.370885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:10.890 [2024-10-01 15:23:09.370916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.370949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.371056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.371077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:10.890 [2024-10-01 15:23:09.371088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.371101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.371121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.371134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:10.890 [2024-10-01 15:23:09.371145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.371157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.385063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.385124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:10.890 [2024-10-01 15:23:09.385139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.385152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.394635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.394694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.890 [2024-10-01 15:23:09.394708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.394722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.394810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.394833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.890 [2024-10-01 15:23:09.394843] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.394857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.394897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.394911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.890 [2024-10-01 15:23:09.394923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.394935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.395016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.395036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.890 [2024-10-01 15:23:09.395047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.395059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.395096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.395119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:10.890 [2024-10-01 15:23:09.395130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.395143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.395203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.395228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.890 [2024-10-01 15:23:09.395241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.395254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.395302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:10.890 [2024-10-01 15:23:09.395317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.890 [2024-10-01 15:23:09.395328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:10.890 [2024-10-01 15:23:09.395341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.890 [2024-10-01 15:23:09.395509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.906 ms, result 0 00:23:10.890 true 00:23:10.890 15:23:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89167 00:23:10.890 15:23:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89167 00:23:10.890 15:23:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:11.149 [2024-10-01 15:23:09.534278] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:23:11.150 [2024-10-01 15:23:09.534669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89958 ] 00:23:11.408 [2024-10-01 15:23:09.709153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.408 [2024-10-01 15:23:09.758093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.874  Copying: 188/1024 [MB] (188 MBps) Copying: 380/1024 [MB] (192 MBps) Copying: 574/1024 [MB] (194 MBps) Copying: 769/1024 [MB] (195 MBps) Copying: 959/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 191 MBps) 00:23:16.874 00:23:16.874 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89167 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:16.874 15:23:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:17.132 [2024-10-01 15:23:15.506064] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:23:17.132 [2024-10-01 15:23:15.506220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90025 ] 00:23:17.132 [2024-10-01 15:23:15.676151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.391 [2024-10-01 15:23:15.722869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.391 [2024-10-01 15:23:15.826367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:17.391 [2024-10-01 15:23:15.826447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:17.391 [2024-10-01 15:23:15.892051] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:17.391 [2024-10-01 15:23:15.892413] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:17.391 [2024-10-01 15:23:15.892732] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:17.677 [2024-10-01 15:23:16.198284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.677 [2024-10-01 15:23:16.198504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:17.677 [2024-10-01 15:23:16.198530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:17.677 [2024-10-01 15:23:16.198552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.677 [2024-10-01 15:23:16.198618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.677 [2024-10-01 15:23:16.198630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.677 [2024-10-01 15:23:16.198645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:17.677 [2024-10-01 15:23:16.198656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.677 [2024-10-01 15:23:16.198682] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:17.677 [2024-10-01 15:23:16.198917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:23:17.677 [2024-10-01 15:23:16.198937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.677 [2024-10-01 15:23:16.198948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.677 [2024-10-01 15:23:16.198959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:23:17.677 [2024-10-01 15:23:16.198969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.677 [2024-10-01 15:23:16.200406] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:17.677 [2024-10-01 15:23:16.202873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.202913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:17.678 [2024-10-01 15:23:16.202927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:23:17.678 [2024-10-01 15:23:16.202941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.203001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.203013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:17.678 [2024-10-01 15:23:16.203025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:17.678 [2024-10-01 15:23:16.203035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.209717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.209868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.678 [2024-10-01 15:23:16.209888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.624 ms 00:23:17.678 [2024-10-01 15:23:16.209899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.210007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.210020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.678 [2024-10-01 15:23:16.210031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:17.678 [2024-10-01 15:23:16.210042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.210096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.210112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:17.678 [2024-10-01 15:23:16.210123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:17.678 [2024-10-01 15:23:16.210134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.210161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:17.678 [2024-10-01 15:23:16.211825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.211853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.678 [2024-10-01 15:23:16.211865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:23:17.678 [2024-10-01 15:23:16.211875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.211905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.211924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:17.678 [2024-10-01 15:23:16.211935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:17.678 [2024-10-01 15:23:16.211945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.211968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:17.678 [2024-10-01 15:23:16.211991] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:17.678 [2024-10-01 15:23:16.212026] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:17.678 [2024-10-01 15:23:16.212044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:17.678 [2024-10-01 15:23:16.212149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:17.678 [2024-10-01 15:23:16.212163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:17.678 [2024-10-01 15:23:16.212192] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:17.678 [2024-10-01 15:23:16.212205] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212218] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212229] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:17.678 [2024-10-01 15:23:16.212239] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:17.678 [2024-10-01 15:23:16.212250] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:17.678 [2024-10-01 15:23:16.212260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:17.678 [2024-10-01 15:23:16.212273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.212287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:17.678 [2024-10-01 15:23:16.212304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:17.678 [2024-10-01 15:23:16.212314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.212386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.678 [2024-10-01 15:23:16.212397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:17.678 [2024-10-01 15:23:16.212407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:17.678 [2024-10-01 15:23:16.212428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.678 [2024-10-01 15:23:16.212516] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:17.678 [2024-10-01 15:23:16.212529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:17.678 [2024-10-01 15:23:16.212549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.678 [2024-10-01 
15:23:16.212572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:17.678 [2024-10-01 15:23:16.212581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:17.678 [2024-10-01 15:23:16.212618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.678 [2024-10-01 15:23:16.212637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:17.678 [2024-10-01 15:23:16.212647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:17.678 [2024-10-01 15:23:16.212656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.678 [2024-10-01 15:23:16.212666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:17.678 [2024-10-01 15:23:16.212675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:17.678 [2024-10-01 15:23:16.212685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:17.678 [2024-10-01 15:23:16.212705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:17.678 [2024-10-01 15:23:16.212733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:17.678 [2024-10-01 15:23:16.212761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:17.678 [2024-10-01 15:23:16.212800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:17.678 [2024-10-01 15:23:16.212827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.678 [2024-10-01 15:23:16.212845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:17.678 [2024-10-01 15:23:16.212854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:17.678 [2024-10-01 15:23:16.212863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.679 [2024-10-01 15:23:16.212872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:17.679 [2024-10-01 15:23:16.212881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:23:17.679 [2024-10-01 15:23:16.212890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.679 [2024-10-01 15:23:16.212899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:17.679 [2024-10-01 15:23:16.212909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:17.679 [2024-10-01 15:23:16.212917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.679 [2024-10-01 15:23:16.212929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:17.679 [2024-10-01 15:23:16.212939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:17.679 [2024-10-01 15:23:16.212948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.679 [2024-10-01 15:23:16.212958] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:17.679 [2024-10-01 15:23:16.212968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:17.679 [2024-10-01 15:23:16.212978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.679 [2024-10-01 15:23:16.212987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.679 [2024-10-01 15:23:16.212998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:17.679 [2024-10-01 15:23:16.213008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:17.679 [2024-10-01 15:23:16.213017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:17.679 [2024-10-01 15:23:16.213026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:17.679 [2024-10-01 15:23:16.213035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:17.679 [2024-10-01 15:23:16.213044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:17.679 [2024-10-01 15:23:16.213055] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:17.679 [2024-10-01 15:23:16.213067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:17.679 [2024-10-01 15:23:16.213092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:17.679 [2024-10-01 15:23:16.213104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:17.679 [2024-10-01 15:23:16.213114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:17.679 [2024-10-01 15:23:16.213125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:17.679 [2024-10-01 15:23:16.213135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:17.679 [2024-10-01 15:23:16.213145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:17.679 [2024-10-01 15:23:16.213155] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:17.679 [2024-10-01 15:23:16.213166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:17.679 [2024-10-01 15:23:16.213442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:17.679 [2024-10-01 15:23:16.213729] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:17.679 [2024-10-01 15:23:16.213781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:17.679 [2024-10-01 15:23:16.213882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:17.679 [2024-10-01 15:23:16.213978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:17.679 [2024-10-01 15:23:16.214031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:17.679 [2024-10-01 15:23:16.214081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.679 [2024-10-01 15:23:16.214118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:17.679 [2024-10-01 15:23:16.214204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:23:17.679 [2024-10-01 15:23:16.214241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.237481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.237703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:17.938 [2024-10-01 15:23:16.237950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.175 ms 00:23:17.938 [2024-10-01 15:23:16.238150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.238329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.238498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:17.938 [2024-10-01 15:23:16.238785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:17.938 [2024-10-01 15:23:16.238930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:17.938 [2024-10-01 15:23:16.249857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.250009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:17.938 [2024-10-01 15:23:16.250032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.843 ms 00:23:17.938 [2024-10-01 15:23:16.250054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.250100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.250115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:17.938 [2024-10-01 15:23:16.250127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:17.938 [2024-10-01 15:23:16.250137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.250627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.250642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:17.938 [2024-10-01 15:23:16.250654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:17.938 [2024-10-01 15:23:16.250664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.250794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.250808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:17.938 [2024-10-01 15:23:16.250819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:17.938 [2024-10-01 15:23:16.250845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.256921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.257061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:17.938 [2024-10-01 15:23:16.257096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.063 ms 00:23:17.938 [2024-10-01 15:23:16.257111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.259652] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:17.938 [2024-10-01 15:23:16.259689] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:17.938 [2024-10-01 15:23:16.259704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.259715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:17.938 [2024-10-01 15:23:16.259726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:23:17.938 [2024-10-01 15:23:16.259736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.273286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.273337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:17.938 [2024-10-01 15:23:16.273352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.530 ms 00:23:17.938 [2024-10-01 15:23:16.273363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.275842] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.275990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:17.938 [2024-10-01 15:23:16.276010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:23:17.938 [2024-10-01 15:23:16.276020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.277496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.277537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:17.938 [2024-10-01 15:23:16.277549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:23:17.938 [2024-10-01 15:23:16.277559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.277904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.277926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:17.938 [2024-10-01 15:23:16.277942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:23:17.938 [2024-10-01 15:23:16.277953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.938 [2024-10-01 15:23:16.298827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.938 [2024-10-01 15:23:16.298901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:17.938 [2024-10-01 15:23:16.298920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.886 ms 00:23:17.938 [2024-10-01 15:23:16.298932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.305919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:17.939 [2024-10-01 15:23:16.309342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.309377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:17.939 [2024-10-01 15:23:16.309392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.349 ms 00:23:17.939 [2024-10-01 15:23:16.309403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.309506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.309529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:17.939 [2024-10-01 15:23:16.309541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:17.939 [2024-10-01 15:23:16.309559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.309643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.309663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:17.939 [2024-10-01 15:23:16.309675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:17.939 [2024-10-01 15:23:16.309684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.309708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.309723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:17.939 [2024-10-01 15:23:16.309734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.005 ms 00:23:17.939 [2024-10-01 15:23:16.309744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.309778] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:17.939 [2024-10-01 15:23:16.309799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.309809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:17.939 [2024-10-01 15:23:16.309820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:17.939 [2024-10-01 15:23:16.309830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.313677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.313714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:17.939 [2024-10-01 15:23:16.313727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:23:17.939 [2024-10-01 15:23:16.313743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.313817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.939 [2024-10-01 15:23:16.313833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:17.939 [2024-10-01 15:23:16.313844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:17.939 [2024-10-01 15:23:16.313854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.939 [2024-10-01 15:23:16.314990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.468 ms, result 0 00:23:56.156  Copying: 29/1024 [MB] (29 MBps) Copying: 62/1024 [MB] (32 MBps) Copying: 90/1024 [MB] (28 MBps) Copying: 120/1024 [MB] (29 MBps) Copying: 149/1024 [MB] (28 MBps) Copying: 176/1024 [MB] (27 MBps) Copying: 203/1024 [MB] (26 MBps) Copying: 230/1024 [MB] (27 MBps) Copying: 257/1024 [MB] (27 MBps) Copying: 283/1024 [MB] (26 MBps) Copying: 311/1024 [MB] (27 MBps) Copying: 339/1024 [MB] (28 MBps) Copying: 367/1024 [MB] (27 MBps) Copying: 393/1024 [MB] (26 MBps) Copying: 420/1024 [MB] (26 MBps) Copying: 447/1024 [MB] (27 MBps) Copying: 473/1024 [MB] (25 MBps) Copying: 499/1024 [MB] (25 MBps) Copying: 524/1024 [MB] (25 MBps) Copying: 550/1024 [MB] (25 MBps) Copying: 575/1024 [MB] (25 MBps) Copying: 601/1024 [MB] (25 MBps) Copying: 627/1024 [MB] (26 MBps) Copying: 654/1024 [MB] (26 MBps) Copying: 680/1024 [MB] (26 MBps) Copying: 706/1024 [MB] (25 MBps) Copying: 733/1024 [MB] (27 MBps) Copying: 761/1024 [MB] (27 MBps) Copying: 789/1024 [MB] (27 MBps) Copying: 816/1024 [MB] (27 MBps) Copying: 844/1024 [MB] (27 MBps) Copying: 870/1024 [MB] (26 MBps) Copying: 898/1024 [MB] (28 MBps) Copying: 926/1024 [MB] (28 MBps) Copying: 954/1024 [MB] (28 MBps) Copying: 983/1024 [MB] (28 MBps) Copying: 1010/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (13 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-01 15:23:54.565120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.156 [2024-10-01 15:23:54.565383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:56.156 [2024-10-01 15:23:54.565498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:56.156 [2024-10-01 15:23:54.565558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.156 
[2024-10-01 15:23:54.568028] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:56.157 [2024-10-01 15:23:54.571901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.572038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:56.157 [2024-10-01 15:23:54.572145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.619 ms 00:23:56.157 [2024-10-01 15:23:54.572319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.157 [2024-10-01 15:23:54.580546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.580710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:56.157 [2024-10-01 15:23:54.580738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.456 ms 00:23:56.157 [2024-10-01 15:23:54.580758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.157 [2024-10-01 15:23:54.605124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.605197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:56.157 [2024-10-01 15:23:54.605220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.351 ms 00:23:56.157 [2024-10-01 15:23:54.605232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.157 [2024-10-01 15:23:54.610304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.610343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:56.157 [2024-10-01 15:23:54.610357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.044 ms 00:23:56.157 [2024-10-01 15:23:54.610368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.157 [2024-10-01 15:23:54.612019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.612157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:56.157 [2024-10-01 15:23:54.612191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:23:56.157 [2024-10-01 15:23:54.612202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.157 [2024-10-01 15:23:54.615817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.157 [2024-10-01 15:23:54.615853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:56.157 [2024-10-01 15:23:54.615873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:23:56.157 [2024-10-01 15:23:54.615883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.728385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.416 [2024-10-01 15:23:54.728471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:56.416 [2024-10-01 15:23:54.728488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.646 ms 00:23:56.416 [2024-10-01 15:23:54.728500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.730716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.416 [2024-10-01 15:23:54.730872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:56.416 [2024-10-01 
15:23:54.730894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:23:56.416 [2024-10-01 15:23:54.730904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.732299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.416 [2024-10-01 15:23:54.732334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:56.416 [2024-10-01 15:23:54.732347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:23:56.416 [2024-10-01 15:23:54.732357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.733570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.416 [2024-10-01 15:23:54.733598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:56.416 [2024-10-01 15:23:54.733610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:23:56.416 [2024-10-01 15:23:54.733620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.734676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.416 [2024-10-01 15:23:54.734710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:56.416 [2024-10-01 15:23:54.734722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:23:56.416 [2024-10-01 15:23:54.734731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.416 [2024-10-01 15:23:54.734757] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:56.416 [2024-10-01 15:23:54.734775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109824 / 261120 wr_cnt: 1 state: open 00:23:56.416 [2024-10-01 15:23:54.734788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:56.416 [2024-10-01 15:23:54.734874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 
15:23:54.734917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.734990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:23:56.417 [2024-10-01 15:23:54.735196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:56.417 [2024-10-01 15:23:54.735732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:56.418 [2024-10-01 15:23:54.735866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:56.418 [2024-10-01 15:23:54.735882] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49acedf4-6ec5-40c7-ad30-b97ab3bf08c2 00:23:56.418 [2024-10-01 15:23:54.735896] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109824 00:23:56.418 [2024-10-01 15:23:54.735906] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110784 00:23:56.418 [2024-10-01 15:23:54.735916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109824 00:23:56.418 [2024-10-01 15:23:54.735926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:23:56.418 [2024-10-01 15:23:54.735936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:56.418 [2024-10-01 15:23:54.735946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:56.418 [2024-10-01 15:23:54.735956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:56.418 [2024-10-01 15:23:54.735965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:56.418 [2024-10-01 15:23:54.735975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:56.418 [2024-10-01 15:23:54.735997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.418 [2024-10-01 15:23:54.736007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:56.418 [2024-10-01 15:23:54.736017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:23:56.418 [2024-10-01 15:23:54.736027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
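[note] The WAF figure in the shutdown dump above can be cross-checked from the two write counters in the same dump, assuming the driver reports WAF as total media writes over user writes:
    \[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{110784}{109824} \approx 1.0087 \]
The second dump later in this log (total writes 154816, user writes 152832) is consistent the same way: 154816 / 152832 ≈ 1.0130.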
00:23:56.418 [2024-10-01 15:23:54.737770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.418 [2024-10-01 15:23:54.737891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:56.418 [2024-10-01 15:23:54.737910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:23:56.418 [2024-10-01 15:23:54.737921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.738044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.418 [2024-10-01 15:23:54.738063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:56.418 [2024-10-01 15:23:54.738078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:56.418 [2024-10-01 15:23:54.738088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.744143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.744177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.418 [2024-10-01 15:23:54.744189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.744200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.744256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.744267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.418 [2024-10-01 15:23:54.744293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.744303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.744398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.744412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.418 [2024-10-01 15:23:54.744423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.744433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.744450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.744460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.418 [2024-10-01 15:23:54.744470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.744485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.757929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.757978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.418 [2024-10-01 15:23:54.758005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.758015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.418 [2024-10-01 15:23:54.766340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:56.418 [2024-10-01 15:23:54.766411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.418 [2024-10-01 15:23:54.766433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.418 [2024-10-01 15:23:54.766491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.418 [2024-10-01 15:23:54.766611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:56.418 [2024-10-01 15:23:54.766681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.418 [2024-10-01 15:23:54.766765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.418 [2024-10-01 15:23:54.766838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.418 [2024-10-01 15:23:54.766849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.418 [2024-10-01 15:23:54.766860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.418 [2024-10-01 15:23:54.766984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 203.816 ms, result 0 00:23:57.350 00:23:57.350 00:23:57.350 15:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:59.297 15:23:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:59.297 [2024-10-01 15:23:57.457348] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
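[note] The dirty_shutdown.sh@93 step above re-reads the test data out of the restored FTL bdev with spdk_dd, SPDK's dd-style copy tool, while the md5sum at @90 produces the reference checksum the test later compares against. A commented sketch of that same invocation follows; the flag readings are dd-style conventions, and the 4096-byte logical block size is inferred (not stated here) from 262144 blocks matching the 1024/1024 [MB] copy progress reported below:
    # sketch of the @93 step above, assuming 4 KiB FTL logical blocks
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \                                             # input bdev: read from ftl0
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \   # output: a regular file
        --count=262144 \                                        # input blocks: 262144 x 4 KiB = 1 GiB
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # bdev config to load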
00:23:59.297 [2024-10-01 15:23:57.457926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90449 ] 00:23:59.297 [2024-10-01 15:23:57.627740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.297 [2024-10-01 15:23:57.679464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.297 [2024-10-01 15:23:57.782538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.297 [2024-10-01 15:23:57.782615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.557 [2024-10-01 15:23:57.942186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.942236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.557 [2024-10-01 15:23:57.942262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.557 [2024-10-01 15:23:57.942281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.942340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.942358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.557 [2024-10-01 15:23:57.942376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:59.557 [2024-10-01 15:23:57.942394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.942416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.557 [2024-10-01 15:23:57.942680] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.557 [2024-10-01 15:23:57.942700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.942715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.557 [2024-10-01 15:23:57.942726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:23:59.557 [2024-10-01 15:23:57.942738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.944208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.557 [2024-10-01 15:23:57.946882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.946911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.557 [2024-10-01 15:23:57.946932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:23:59.557 [2024-10-01 15:23:57.946942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.947009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.947025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.557 [2024-10-01 15:23:57.947037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:59.557 [2024-10-01 15:23:57.947049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.953818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:59.557 [2024-10-01 15:23:57.953959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.557 [2024-10-01 15:23:57.954092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.729 ms 00:23:59.557 [2024-10-01 15:23:57.954130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.954300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.954349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.557 [2024-10-01 15:23:57.954437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:59.557 [2024-10-01 15:23:57.954473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.557 [2024-10-01 15:23:57.954568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.557 [2024-10-01 15:23:57.954604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.557 [2024-10-01 15:23:57.954648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:59.557 [2024-10-01 15:23:57.954728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.558 [2024-10-01 15:23:57.954789] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.558 [2024-10-01 15:23:57.956484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.558 [2024-10-01 15:23:57.956603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.558 [2024-10-01 15:23:57.956671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:23:59.558 [2024-10-01 15:23:57.956705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.558 [2024-10-01 15:23:57.956767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.558 [2024-10-01 15:23:57.956810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.558 [2024-10-01 15:23:57.956841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.558 [2024-10-01 15:23:57.956871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.558 [2024-10-01 15:23:57.956960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.558 [2024-10-01 15:23:57.957014] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.558 [2024-10-01 15:23:57.957098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.558 [2024-10-01 15:23:57.957157] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:59.558 [2024-10-01 15:23:57.957352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.558 [2024-10-01 15:23:57.957406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.558 [2024-10-01 15:23:57.957500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:59.558 [2024-10-01 15:23:57.957518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.558 [2024-10-01 15:23:57.957536] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.558 [2024-10-01 15:23:57.957548] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.558 [2024-10-01 15:23:57.957558] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.558 [2024-10-01 15:23:57.957568] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.558 [2024-10-01 15:23:57.957578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.558 [2024-10-01 15:23:57.957589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.558 [2024-10-01 15:23:57.957599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.558 [2024-10-01 15:23:57.957617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:23:59.558 [2024-10-01 15:23:57.957629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.558 [2024-10-01 15:23:57.957721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.558 [2024-10-01 15:23:57.957733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.558 [2024-10-01 15:23:57.957747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:59.558 [2024-10-01 15:23:57.957761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.558 [2024-10-01 15:23:57.957866] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.558 [2024-10-01 15:23:57.957880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.558 [2024-10-01 15:23:57.957891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.558 [2024-10-01 15:23:57.957912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.957923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.558 [2024-10-01 15:23:57.957932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.957942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.558 [2024-10-01 15:23:57.957955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.558 [2024-10-01 15:23:57.957965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.558 [2024-10-01 15:23:57.957977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.558 [2024-10-01 15:23:57.957987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.558 [2024-10-01 15:23:57.957999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.558 [2024-10-01 15:23:57.958015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.558 [2024-10-01 15:23:57.958027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.558 [2024-10-01 15:23:57.958040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:59.558 [2024-10-01 15:23:57.958052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.558 [2024-10-01 15:23:57.958071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958084] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.558 [2024-10-01 15:23:57.958107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.558 [2024-10-01 15:23:57.958135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.558 [2024-10-01 15:23:57.958167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.558 [2024-10-01 15:23:57.958213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.558 [2024-10-01 15:23:57.958240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.558 [2024-10-01 15:23:57.958258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.558 [2024-10-01 15:23:57.958267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:59.558 [2024-10-01 15:23:57.958277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.558 [2024-10-01 15:23:57.958286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.558 [2024-10-01 15:23:57.958295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:59.558 [2024-10-01 15:23:57.958305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.558 [2024-10-01 15:23:57.958323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:59.558 [2024-10-01 15:23:57.958331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.558 [2024-10-01 15:23:57.958364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.558 [2024-10-01 15:23:57.958374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.558 [2024-10-01 15:23:57.958405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.558 [2024-10-01 15:23:57.958414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.558 [2024-10-01 15:23:57.958424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.558 
[2024-10-01 15:23:57.958433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.558 [2024-10-01 15:23:57.958442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.558 [2024-10-01 15:23:57.958451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.558 [2024-10-01 15:23:57.958462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.558 [2024-10-01 15:23:57.958475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.558 [2024-10-01 15:23:57.958487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.558 [2024-10-01 15:23:57.958498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:59.558 [2024-10-01 15:23:57.958509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:59.558 [2024-10-01 15:23:57.958520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:59.558 [2024-10-01 15:23:57.958531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:59.558 [2024-10-01 15:23:57.958545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:59.558 [2024-10-01 15:23:57.958556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:59.558 [2024-10-01 15:23:57.958566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:59.558 [2024-10-01 15:23:57.958577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:59.558 [2024-10-01 15:23:57.958587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:59.558 [2024-10-01 15:23:57.958597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:59.558 [2024-10-01 15:23:57.958608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:59.558 [2024-10-01 15:23:57.958618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:59.558 [2024-10-01 15:23:57.958628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:59.558 [2024-10-01 15:23:57.958639] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.559 [2024-10-01 15:23:57.958651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.559 [2024-10-01 15:23:57.958662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.559 [2024-10-01 15:23:57.958672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.559 [2024-10-01 15:23:57.958682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.559 [2024-10-01 15:23:57.958692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.559 [2024-10-01 15:23:57.958703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.958717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.559 [2024-10-01 15:23:57.958727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:23:59.559 [2024-10-01 15:23:57.958737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.984487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.984543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.559 [2024-10-01 15:23:57.984568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.727 ms 00:23:59.559 [2024-10-01 15:23:57.984583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.984699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.984713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.559 [2024-10-01 15:23:57.984728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:59.559 [2024-10-01 15:23:57.984752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.995990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.996162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.559 [2024-10-01 15:23:57.996200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.165 ms 00:23:59.559 [2024-10-01 15:23:57.996212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.996266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.996279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.559 [2024-10-01 15:23:57.996293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:59.559 [2024-10-01 15:23:57.996314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.996803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.996844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.559 [2024-10-01 15:23:57.996857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:23:59.559 [2024-10-01 15:23:57.996869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:57.996996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:57.997015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.559 [2024-10-01 15:23:57.997027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:59.559 [2024-10-01 15:23:57.997038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
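[note] The layout dump above is internally consistent, assuming one L2P entry per logical block with the reported 4-byte address size:
    \[ 20971520 \times 4\,\mathrm{B} = 80\,\mathrm{MiB} \]
which matches the 80.00 MiB l2p region exactly; and at 4 KiB per block, 20971520 entries map 80 GiB of user space out of the 102400.00 MiB data_btm region on the 103424.00 MiB base device, the remainder presumably being band metadata and overprovisioning reserve.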
00:23:59.559 [2024-10-01 15:23:58.003074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.003232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.559 [2024-10-01 15:23:58.003268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.022 ms 00:23:59.559 [2024-10-01 15:23:58.003279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.005960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:59.559 [2024-10-01 15:23:58.005998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.559 [2024-10-01 15:23:58.006020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.006031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.559 [2024-10-01 15:23:58.006042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:23:59.559 [2024-10-01 15:23:58.006053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.019579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.019634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.559 [2024-10-01 15:23:58.019661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.506 ms 00:23:59.559 [2024-10-01 15:23:58.019672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.021628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.021760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.559 [2024-10-01 15:23:58.021779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:23:59.559 [2024-10-01 15:23:58.021789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.023270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.023302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.559 [2024-10-01 15:23:58.023313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:23:59.559 [2024-10-01 15:23:58.023323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.023655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.023673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.559 [2024-10-01 15:23:58.023694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:59.559 [2024-10-01 15:23:58.023704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.043784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.043856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.559 [2024-10-01 15:23:58.043872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.089 ms 00:23:59.559 [2024-10-01 15:23:58.043883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.050591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.559 [2024-10-01 15:23:58.054035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.054070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.559 [2024-10-01 15:23:58.054091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.108 ms 00:23:59.559 [2024-10-01 15:23:58.054101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.054216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.054238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.559 [2024-10-01 15:23:58.054249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:59.559 [2024-10-01 15:23:58.054259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.055933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.056083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.559 [2024-10-01 15:23:58.056103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:23:59.559 [2024-10-01 15:23:58.056119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.056159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.056179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.559 [2024-10-01 15:23:58.056191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:59.559 [2024-10-01 15:23:58.056201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.056239] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.559 [2024-10-01 15:23:58.056252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.056262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.559 [2024-10-01 15:23:58.056272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:59.559 [2024-10-01 15:23:58.056282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.060061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.060097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.559 [2024-10-01 15:23:58.060109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:23:59.559 [2024-10-01 15:23:58.060120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 [2024-10-01 15:23:58.060203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.559 [2024-10-01 15:23:58.060216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.559 [2024-10-01 15:23:58.060227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:59.559 [2024-10-01 15:23:58.060238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.559 
[2024-10-01 15:23:58.061309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.876 ms, result 0 00:24:30.461  Copying: 1160/1048576 [kB] (1160 kBps) Copying: 10684/1048576 [kB] (9524 kBps) Copying: 47/1024 [MB] (36 MBps) Copying: 84/1024 [MB] (37 MBps) Copying: 120/1024 [MB] (36 MBps) Copying: 156/1024 [MB] (36 MBps) Copying: 193/1024 [MB] (36 MBps) Copying: 230/1024 [MB] (37 MBps) Copying: 267/1024 [MB] (36 MBps) Copying: 302/1024 [MB] (35 MBps) Copying: 339/1024 [MB] (36 MBps) Copying: 375/1024 [MB] (36 MBps) Copying: 411/1024 [MB] (35 MBps) Copying: 449/1024 [MB] (37 MBps) Copying: 486/1024 [MB] (36 MBps) Copying: 520/1024 [MB] (34 MBps) Copying: 556/1024 [MB] (35 MBps) Copying: 591/1024 [MB] (35 MBps) Copying: 625/1024 [MB] (33 MBps) Copying: 660/1024 [MB] (34 MBps) Copying: 697/1024 [MB] (37 MBps) Copying: 734/1024 [MB] (36 MBps) Copying: 770/1024 [MB] (36 MBps) Copying: 805/1024 [MB] (34 MBps) Copying: 840/1024 [MB] (35 MBps) Copying: 876/1024 [MB] (35 MBps) Copying: 913/1024 [MB] (36 MBps) Copying: 950/1024 [MB] (37 MBps) Copying: 986/1024 [MB] (36 MBps) Copying: 1022/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-10-01 15:24:28.939148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.461 [2024-10-01 15:24:28.939278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:30.461 [2024-10-01 15:24:28.939310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:30.461 [2024-10-01 15:24:28.939331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.461 [2024-10-01 15:24:28.939374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:30.461 [2024-10-01 15:24:28.940481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.461 [2024-10-01 15:24:28.940574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:30.461 [2024-10-01 15:24:28.940651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:24:30.461 [2024-10-01 15:24:28.940804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.461 [2024-10-01 15:24:28.941561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.461 [2024-10-01 15:24:28.941758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:30.461 [2024-10-01 15:24:28.941899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:24:30.461 [2024-10-01 15:24:28.942068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.461 [2024-10-01 15:24:28.956867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.461 [2024-10-01 15:24:28.956931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:30.461 [2024-10-01 15:24:28.956959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.422 ms 00:24:30.461 [2024-10-01 15:24:28.956971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.461 [2024-10-01 15:24:28.962205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.461 [2024-10-01 15:24:28.962243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:30.462 [2024-10-01 15:24:28.962257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:24:30.462 [2024-10-01 15:24:28.962267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
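[note] The copy pass above moved 1024 MiB at an average of 34 MBps, i.e. roughly half a minute:
    \[ \frac{1024\,\mathrm{MiB}}{34\,\mathrm{MiB/s}} \approx 30\,\mathrm{s} \]
which lines up with the wall-clock gap between the 'FTL startup' finish at 15:23:58 and the first shutdown trace at 15:24:28.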
00:24:30.462 [2024-10-01 15:24:28.964110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.964150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:30.462 [2024-10-01 15:24:28.964163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:24:30.462 [2024-10-01 15:24:28.964186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.968034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.968185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:30.462 [2024-10-01 15:24:28.968206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:24:30.462 [2024-10-01 15:24:28.968224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.970218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.970253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:30.462 [2024-10-01 15:24:28.970265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:24:30.462 [2024-10-01 15:24:28.970274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.972386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.972419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:30.462 [2024-10-01 15:24:28.972430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:24:30.462 [2024-10-01 15:24:28.972440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.973966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.974015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:30.462 [2024-10-01 15:24:28.974027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:24:30.462 [2024-10-01 15:24:28.974036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.975302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.975336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:30.462 [2024-10-01 15:24:28.975347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:24:30.462 [2024-10-01 15:24:28.975357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.976480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.462 [2024-10-01 15:24:28.976513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:30.462 [2024-10-01 15:24:28.976525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:24:30.462 [2024-10-01 15:24:28.976534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.462 [2024-10-01 15:24:28.976560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:30.462 [2024-10-01 15:24:28.976575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:30.462 [2024-10-01 15:24:28.976588] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:30.462 [2024-10-01 15:24:28.976600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 
15:24:28.976849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.976998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:24:30.462 [2024-10-01 15:24:28.977112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:30.462 [2024-10-01 15:24:28.977271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:30.463 [2024-10-01 15:24:28.977666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:30.463 [2024-10-01 15:24:28.977676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49acedf4-6ec5-40c7-ad30-b97ab3bf08c2 00:24:30.463 [2024-10-01 15:24:28.977687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:30.463 [2024-10-01 15:24:28.977709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 154816 00:24:30.463 [2024-10-01 15:24:28.977719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 152832 00:24:30.463 [2024-10-01 15:24:28.977729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0130 00:24:30.463 [2024-10-01 15:24:28.977748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:30.463 [2024-10-01 15:24:28.977758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:30.463 [2024-10-01 15:24:28.977768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:30.463 [2024-10-01 15:24:28.977777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:30.463 [2024-10-01 15:24:28.977786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:30.463 [2024-10-01 15:24:28.977795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.463 [2024-10-01 15:24:28.977805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:30.463 [2024-10-01 15:24:28.977816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:24:30.463 [2024-10-01 15:24:28.977826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.979519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.463 [2024-10-01 15:24:28.979548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:30.463 [2024-10-01 15:24:28.979559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:24:30.463 [2024-10-01 15:24:28.979569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.979680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.463 [2024-10-01 15:24:28.979700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:30.463 [2024-10-01 15:24:28.979711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:30.463 [2024-10-01 15:24:28.979723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.985669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.463 [2024-10-01 15:24:28.985701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.463 [2024-10-01 15:24:28.985713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.463 [2024-10-01 15:24:28.985732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.985789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.463 [2024-10-01 15:24:28.985800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.463 [2024-10-01 15:24:28.985818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.463 [2024-10-01 15:24:28.985828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.985893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:30.463 [2024-10-01 15:24:28.985906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.463 [2024-10-01 15:24:28.985917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.463 [2024-10-01 15:24:28.985934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.985951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.463 [2024-10-01 15:24:28.985962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.463 [2024-10-01 15:24:28.985972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.463 [2024-10-01 15:24:28.985989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.463 [2024-10-01 15:24:28.998820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.463 [2024-10-01 15:24:28.998874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.463 [2024-10-01 15:24:28.998900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.463 [2024-10-01 15:24:28.998923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.007473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.007532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.722 [2024-10-01 15:24:29.007550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.007563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.007646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.007659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.722 [2024-10-01 15:24:29.007671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.007681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.007707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.007718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.722 [2024-10-01 15:24:29.007729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.007738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.007825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.007850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.722 [2024-10-01 15:24:29.007861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.007871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.007907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.007925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:30.722 [2024-10-01 15:24:29.007936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.007946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 
15:24:29.007987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.008002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.722 [2024-10-01 15:24:29.008012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.008022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.008066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.722 [2024-10-01 15:24:29.008078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.722 [2024-10-01 15:24:29.008096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.722 [2024-10-01 15:24:29.008106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.722 [2024-10-01 15:24:29.008282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.213 ms, result 0 00:24:30.722 00:24:30.722 00:24:30.980 15:24:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:32.911 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:32.911 15:24:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.911 [2024-10-01 15:24:31.085361] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:24:32.911 [2024-10-01 15:24:31.085856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90797 ] 00:24:32.911 [2024-10-01 15:24:31.256849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.911 [2024-10-01 15:24:31.304861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.911 [2024-10-01 15:24:31.408359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:32.911 [2024-10-01 15:24:31.408437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.171 [2024-10-01 15:24:31.567844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.171 [2024-10-01 15:24:31.567902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.171 [2024-10-01 15:24:31.567921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.171 [2024-10-01 15:24:31.567932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.171 [2024-10-01 15:24:31.567989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.171 [2024-10-01 15:24:31.568004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.171 [2024-10-01 15:24:31.568014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:33.171 [2024-10-01 15:24:31.568031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.171 [2024-10-01 15:24:31.568053] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.171 [2024-10-01 15:24:31.568287] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.171 [2024-10-01 15:24:31.568310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.171 [2024-10-01 15:24:31.568320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.171 [2024-10-01 15:24:31.568332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:24:33.171 [2024-10-01 15:24:31.568342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.171 [2024-10-01 15:24:31.569760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:33.172 [2024-10-01 15:24:31.572377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.572433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:33.172 [2024-10-01 15:24:31.572454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:24:33.172 [2024-10-01 15:24:31.572465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.572532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.572548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:33.172 [2024-10-01 15:24:31.572562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:33.172 [2024-10-01 15:24:31.572573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.579398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.579444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.172 [2024-10-01 15:24:31.579457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:24:33.172 [2024-10-01 15:24:31.579469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.579601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.579619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.172 [2024-10-01 15:24:31.579630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:33.172 [2024-10-01 15:24:31.579640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.579724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.579738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.172 [2024-10-01 15:24:31.579749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:33.172 [2024-10-01 15:24:31.579759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.579788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.172 [2024-10-01 15:24:31.581461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.581488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.172 [2024-10-01 15:24:31.581501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:24:33.172 [2024-10-01 15:24:31.581520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 
[2024-10-01 15:24:31.581552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.581562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.172 [2024-10-01 15:24:31.581580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:33.172 [2024-10-01 15:24:31.581591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.581614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:33.172 [2024-10-01 15:24:31.581649] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:33.172 [2024-10-01 15:24:31.581691] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:33.172 [2024-10-01 15:24:31.581715] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:33.172 [2024-10-01 15:24:31.581804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.172 [2024-10-01 15:24:31.581818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.172 [2024-10-01 15:24:31.581830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:33.172 [2024-10-01 15:24:31.581851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.172 [2024-10-01 15:24:31.581872] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.172 [2024-10-01 15:24:31.581884] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.172 [2024-10-01 15:24:31.581894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.172 [2024-10-01 15:24:31.581904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.172 [2024-10-01 15:24:31.581914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.172 [2024-10-01 15:24:31.581924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.581943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.172 [2024-10-01 15:24:31.581954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:33.172 [2024-10-01 15:24:31.581964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.582039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.172 [2024-10-01 15:24:31.582056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.172 [2024-10-01 15:24:31.582073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:33.172 [2024-10-01 15:24:31.582083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.172 [2024-10-01 15:24:31.582182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.172 [2024-10-01 15:24:31.582209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.172 [2024-10-01 15:24:31.582220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582240] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.172 [2024-10-01 15:24:31.582260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.172 [2024-10-01 15:24:31.582287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.172 [2024-10-01 15:24:31.582305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.172 [2024-10-01 15:24:31.582318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.172 [2024-10-01 15:24:31.582328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.172 [2024-10-01 15:24:31.582336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.172 [2024-10-01 15:24:31.582346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.172 [2024-10-01 15:24:31.582355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.172 [2024-10-01 15:24:31.582374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.172 [2024-10-01 15:24:31.582401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.172 [2024-10-01 15:24:31.582428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.172 [2024-10-01 15:24:31.582454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.172 [2024-10-01 15:24:31.582487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.172 [2024-10-01 15:24:31.582513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.172 [2024-10-01 15:24:31.582531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.172 [2024-10-01 15:24:31.582540] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.172 [2024-10-01 15:24:31.582549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.172 [2024-10-01 15:24:31.582559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.172 [2024-10-01 15:24:31.582568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.172 [2024-10-01 15:24:31.582577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.172 [2024-10-01 15:24:31.582595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.172 [2024-10-01 15:24:31.582604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582616] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.172 [2024-10-01 15:24:31.582633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:33.172 [2024-10-01 15:24:31.582654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.172 [2024-10-01 15:24:31.582677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.172 [2024-10-01 15:24:31.582686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.172 [2024-10-01 15:24:31.582695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.172 [2024-10-01 15:24:31.582705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.172 [2024-10-01 15:24:31.582713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.172 [2024-10-01 15:24:31.582723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.172 [2024-10-01 15:24:31.582733] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.172 [2024-10-01 15:24:31.582745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.173 [2024-10-01 15:24:31.582767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.173 [2024-10-01 15:24:31.582776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.173 [2024-10-01 15:24:31.582786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.173 [2024-10-01 15:24:31.582800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.173 [2024-10-01 15:24:31.582810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.173 [2024-10-01 15:24:31.582820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.173 [2024-10-01 
15:24:31.582831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.173 [2024-10-01 15:24:31.582841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.173 [2024-10-01 15:24:31.582851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.173 [2024-10-01 15:24:31.582902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.173 [2024-10-01 15:24:31.582913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.173 [2024-10-01 15:24:31.582934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.173 [2024-10-01 15:24:31.582943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.173 [2024-10-01 15:24:31.582953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.173 [2024-10-01 15:24:31.582967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.582977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.173 [2024-10-01 15:24:31.582987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:24:33.173 [2024-10-01 15:24:31.582997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.602382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.602462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.173 [2024-10-01 15:24:31.602483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.351 ms 00:24:33.173 [2024-10-01 15:24:31.602497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.602617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.602633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:33.173 [2024-10-01 15:24:31.602646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:33.173 [2024-10-01 15:24:31.602682] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.613668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.613721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.173 [2024-10-01 15:24:31.613735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.896 ms 00:24:33.173 [2024-10-01 15:24:31.613746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.613800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.613811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.173 [2024-10-01 15:24:31.613823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:33.173 [2024-10-01 15:24:31.613833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.614324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.614353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.173 [2024-10-01 15:24:31.614365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:24:33.173 [2024-10-01 15:24:31.614375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.614496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.614510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.173 [2024-10-01 15:24:31.614520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:24:33.173 [2024-10-01 15:24:31.614530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.620454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.620501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.173 [2024-10-01 15:24:31.620518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.911 ms 00:24:33.173 [2024-10-01 15:24:31.620529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.623146] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:33.173 [2024-10-01 15:24:31.623196] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:33.173 [2024-10-01 15:24:31.623211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.623222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:33.173 [2024-10-01 15:24:31.623233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:24:33.173 [2024-10-01 15:24:31.623243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.636565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.636607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:33.173 [2024-10-01 15:24:31.636634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.295 ms 00:24:33.173 [2024-10-01 15:24:31.636645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 
15:24:31.638647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.638680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:33.173 [2024-10-01 15:24:31.638692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:24:33.173 [2024-10-01 15:24:31.638702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.640094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.640126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:33.173 [2024-10-01 15:24:31.640138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:24:33.173 [2024-10-01 15:24:31.640148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.640464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.640492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:33.173 [2024-10-01 15:24:31.640504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:24:33.173 [2024-10-01 15:24:31.640515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.661076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.661158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:33.173 [2024-10-01 15:24:31.661191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:24:33.173 [2024-10-01 15:24:31.661202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.667597] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:33.173 [2024-10-01 15:24:31.670755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.670789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:33.173 [2024-10-01 15:24:31.670803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.514 ms 00:24:33.173 [2024-10-01 15:24:31.670823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.670918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.670930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:33.173 [2024-10-01 15:24:31.670942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.173 [2024-10-01 15:24:31.670952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.671848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.671876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:33.173 [2024-10-01 15:24:31.671887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:24:33.173 [2024-10-01 15:24:31.671901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.671926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.671938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:33.173 [2024-10-01 15:24:31.671956] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.173 [2024-10-01 15:24:31.671967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.672006] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:33.173 [2024-10-01 15:24:31.672024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.672034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:33.173 [2024-10-01 15:24:31.672045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:33.173 [2024-10-01 15:24:31.672061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.675507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.173 [2024-10-01 15:24:31.675543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:33.173 [2024-10-01 15:24:31.675557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:24:33.173 [2024-10-01 15:24:31.675577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.173 [2024-10-01 15:24:31.675654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.174 [2024-10-01 15:24:31.675667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:33.174 [2024-10-01 15:24:31.675678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:33.174 [2024-10-01 15:24:31.675696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.174 [2024-10-01 15:24:31.676780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.696 ms, result 0 00:25:09.393  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 83/1024 [MB] (27 MBps) Copying: 111/1024 [MB] (27 MBps) Copying: 139/1024 [MB] (28 MBps) Copying: 167/1024 [MB] (27 MBps) Copying: 196/1024 [MB] (28 MBps) Copying: 225/1024 [MB] (28 MBps) Copying: 253/1024 [MB] (28 MBps) Copying: 280/1024 [MB] (27 MBps) Copying: 310/1024 [MB] (29 MBps) Copying: 338/1024 [MB] (27 MBps) Copying: 366/1024 [MB] (28 MBps) Copying: 394/1024 [MB] (28 MBps) Copying: 422/1024 [MB] (28 MBps) Copying: 451/1024 [MB] (28 MBps) Copying: 479/1024 [MB] (28 MBps) Copying: 507/1024 [MB] (27 MBps) Copying: 536/1024 [MB] (28 MBps) Copying: 564/1024 [MB] (27 MBps) Copying: 592/1024 [MB] (28 MBps) Copying: 620/1024 [MB] (28 MBps) Copying: 648/1024 [MB] (28 MBps) Copying: 677/1024 [MB] (28 MBps) Copying: 705/1024 [MB] (28 MBps) Copying: 735/1024 [MB] (30 MBps) Copying: 765/1024 [MB] (29 MBps) Copying: 794/1024 [MB] (29 MBps) Copying: 824/1024 [MB] (29 MBps) Copying: 853/1024 [MB] (29 MBps) Copying: 882/1024 [MB] (28 MBps) Copying: 912/1024 [MB] (30 MBps) Copying: 943/1024 [MB] (30 MBps) Copying: 972/1024 [MB] (29 MBps) Copying: 1002/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-01 15:25:07.688906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.688987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:09.393 [2024-10-01 15:25:07.689009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.393 [2024-10-01 15:25:07.689024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.689079] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:09.393 [2024-10-01 15:25:07.689811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.689837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:09.393 [2024-10-01 15:25:07.689853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:25:09.393 [2024-10-01 15:25:07.689867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.690393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.690420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:09.393 [2024-10-01 15:25:07.690436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:09.393 [2024-10-01 15:25:07.690450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.694674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.694729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:09.393 [2024-10-01 15:25:07.694746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:25:09.393 [2024-10-01 15:25:07.694760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.700831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.700865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:09.393 [2024-10-01 15:25:07.700878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.048 ms 00:25:09.393 [2024-10-01 15:25:07.700889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.702916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.702955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:09.393 [2024-10-01 15:25:07.702967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:25:09.393 [2024-10-01 15:25:07.702976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.706475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.706527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:09.393 [2024-10-01 15:25:07.706540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.473 ms 00:25:09.393 [2024-10-01 15:25:07.706550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.708401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.708437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:09.393 [2024-10-01 15:25:07.708451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:25:09.393 [2024-10-01 15:25:07.708474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.710386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.710421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:09.393 [2024-10-01 15:25:07.710432] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:25:09.393 [2024-10-01 15:25:07.710443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.711751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.711785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:09.393 [2024-10-01 15:25:07.711796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:25:09.393 [2024-10-01 15:25:07.711806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.712971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.713007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:09.393 [2024-10-01 15:25:07.713019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:25:09.393 [2024-10-01 15:25:07.713027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.714140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.393 [2024-10-01 15:25:07.714191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:09.393 [2024-10-01 15:25:07.714203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:25:09.393 [2024-10-01 15:25:07.714213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.393 [2024-10-01 15:25:07.714238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:09.393 [2024-10-01 15:25:07.714260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:09.393 [2024-10-01 15:25:07.714273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:09.393 [2024-10-01 15:25:07.714284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714400] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:09.393 [2024-10-01 15:25:07.714535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 15:25:07.714651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:09.394 [2024-10-01 
15:25:07.714661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 39-100: 0 / 261120 wr_cnt: 0 state: free
00:25:09.394 [2024-10-01 15:25:07.715330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:09.394 [2024-10-01 15:25:07.715341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49acedf4-6ec5-40c7-ad30-b97ab3bf08c2
00:25:09.394 [2024-10-01 15:25:07.715352] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:25:09.394 [2024-10-01 15:25:07.715361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:09.394 [2024-10-01 15:25:07.715385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:09.394 [2024-10-01 15:25:07.715396] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:09.394 [2024-10-01 15:25:07.715406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:09.394 [2024-10-01 15:25:07.715423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:09.394 [2024-10-01 15:25:07.715433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:09.394 [2024-10-01 15:25:07.715442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:09.394 [2024-10-01 15:25:07.715451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:09.394 [2024-10-01 15:25:07.715461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:09.394 [2024-10-01 15:25:07.715471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:09.394 [2024-10-01 15:25:07.715488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms
00:25:09.394 [2024-10-01 15:25:07.715501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:09.394 [2024-10-01 15:25:07.717238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:25:09.394 [2024-10-01 15:25:07.717263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:09.394 [2024-10-01 15:25:07.717275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:25:09.394 [2024-10-01 15:25:07.717285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.394 [2024-10-01 15:25:07.717401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.394 [2024-10-01 15:25:07.717413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:09.395 [2024-10-01 15:25:07.717424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:09.395 [2024-10-01 15:25:07.717434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.723419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.723452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.395 [2024-10-01 15:25:07.723465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.723475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.723532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.723543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.395 [2024-10-01 15:25:07.723554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.723564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.723639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.723653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.395 [2024-10-01 15:25:07.723664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.723674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.723691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.723706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.395 [2024-10-01 15:25:07.723716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.723727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.737255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.737311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.395 [2024-10-01 15:25:07.737337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.737348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.745955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.395 [2024-10-01 15:25:07.746032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 
15:25:07.746104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.395 [2024-10-01 15:25:07.746126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.395 [2024-10-01 15:25:07.746199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.395 [2024-10-01 15:25:07.746310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:09.395 [2024-10-01 15:25:07.746375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.395 [2024-10-01 15:25:07.746451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:09.395 [2024-10-01 15:25:07.746525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.395 [2024-10-01 15:25:07.746540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:09.395 [2024-10-01 15:25:07.746550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.395 [2024-10-01 15:25:07.746674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.832 ms, result 0 00:25:09.655 00:25:09.655 00:25:09.655 15:25:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:11.580 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:11.580 15:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:11.580 15:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:11.580 15:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:11.580 15:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:11.580 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89167 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89167 ']' 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89167 00:25:11.840 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89167) - No such process 00:25:11.840 Process with pid 89167 is not found 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89167 is not found' 00:25:11.840 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:12.099 Remove shared memory files 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:12.099 00:25:12.099 real 3m12.723s 00:25:12.099 user 3m36.054s 00:25:12.099 sys 0m37.512s 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.099 15:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:12.099 ************************************ 00:25:12.099 END TEST ftl_dirty_shutdown 00:25:12.099 ************************************ 00:25:12.099 15:25:10 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:12.099 15:25:10 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:12.099 15:25:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.099 15:25:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:12.099 ************************************ 00:25:12.099 START TEST ftl_upgrade_shutdown 00:25:12.099 ************************************ 00:25:12.099 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:12.358 * Looking for test storage... 
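The `killprocess` step above probes the target with signal 0, which checks that the PID exists without delivering anything; `kill -0 89167` failing is what produces the "No such process" line and the fallback echo. A minimal sketch of that pattern, with a hypothetical pid variable standing in for the helper's argument:

# kill -0 sends no signal; it only tests that the PID is still alive
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"              # still running: terminate it
    wait "$pid" 2>/dev/null  # reap the child so its exit status is collected
else
    echo "Process with pid $pid is not found"
fi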
00:25:12.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.358 --rc genhtml_branch_coverage=1 00:25:12.358 --rc genhtml_function_coverage=1 00:25:12.358 --rc genhtml_legend=1 00:25:12.358 --rc geninfo_all_blocks=1 00:25:12.358 --rc geninfo_unexecuted_blocks=1 00:25:12.358 00:25:12.358 ' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.358 --rc genhtml_branch_coverage=1 00:25:12.358 --rc genhtml_function_coverage=1 00:25:12.358 --rc genhtml_legend=1 00:25:12.358 --rc geninfo_all_blocks=1 00:25:12.358 --rc geninfo_unexecuted_blocks=1 00:25:12.358 00:25:12.358 ' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.358 --rc genhtml_branch_coverage=1 00:25:12.358 --rc genhtml_function_coverage=1 00:25:12.358 --rc genhtml_legend=1 00:25:12.358 --rc geninfo_all_blocks=1 00:25:12.358 --rc geninfo_unexecuted_blocks=1 00:25:12.358 00:25:12.358 ' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:12.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.358 --rc genhtml_branch_coverage=1 00:25:12.358 --rc genhtml_function_coverage=1 00:25:12.358 --rc genhtml_legend=1 00:25:12.358 --rc geninfo_all_blocks=1 00:25:12.358 --rc geninfo_unexecuted_blocks=1 00:25:12.358 00:25:12.358 ' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.358 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:12.359 15:25:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91276 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91276 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91276 ']' 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.359 15:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:12.617 [2024-10-01 15:25:10.928357] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:25:12.617 [2024-10-01 15:25:10.928511] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91276 ] 00:25:12.617 [2024-10-01 15:25:11.090580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.617 [2024-10-01 15:25:11.140819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:13.548 15:25:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:13.805 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:14.063 { 00:25:14.063 "name": "basen1", 00:25:14.063 "aliases": [ 00:25:14.063 "28729248-db6b-4bed-a66a-f92b8a2744b2" 00:25:14.063 ], 00:25:14.063 "product_name": "NVMe disk", 00:25:14.063 "block_size": 4096, 00:25:14.063 "num_blocks": 1310720, 00:25:14.063 "uuid": "28729248-db6b-4bed-a66a-f92b8a2744b2", 00:25:14.063 "numa_id": -1, 00:25:14.063 "assigned_rate_limits": { 00:25:14.063 "rw_ios_per_sec": 0, 00:25:14.063 "rw_mbytes_per_sec": 0, 00:25:14.063 "r_mbytes_per_sec": 0, 00:25:14.063 "w_mbytes_per_sec": 0 00:25:14.063 }, 00:25:14.063 "claimed": true, 00:25:14.063 "claim_type": "read_many_write_one", 00:25:14.063 "zoned": false, 00:25:14.063 "supported_io_types": { 00:25:14.063 "read": true, 00:25:14.063 "write": true, 00:25:14.063 "unmap": true, 00:25:14.063 "flush": true, 00:25:14.063 "reset": true, 00:25:14.063 "nvme_admin": true, 00:25:14.063 "nvme_io": true, 00:25:14.063 "nvme_io_md": false, 00:25:14.063 "write_zeroes": true, 00:25:14.063 "zcopy": false, 00:25:14.063 "get_zone_info": false, 00:25:14.063 "zone_management": false, 00:25:14.063 "zone_append": false, 00:25:14.063 "compare": true, 00:25:14.063 "compare_and_write": false, 00:25:14.063 "abort": true, 00:25:14.063 "seek_hole": false, 00:25:14.063 "seek_data": false, 00:25:14.063 "copy": true, 00:25:14.063 "nvme_iov_md": false 00:25:14.063 }, 00:25:14.063 "driver_specific": { 00:25:14.063 "nvme": [ 00:25:14.063 { 00:25:14.063 "pci_address": "0000:00:11.0", 00:25:14.063 "trid": { 00:25:14.063 "trtype": "PCIe", 00:25:14.063 "traddr": "0000:00:11.0" 00:25:14.063 }, 00:25:14.063 "ctrlr_data": { 00:25:14.063 "cntlid": 0, 00:25:14.063 "vendor_id": "0x1b36", 00:25:14.063 "model_number": "QEMU NVMe Ctrl", 00:25:14.063 "serial_number": "12341", 00:25:14.063 "firmware_revision": "8.0.0", 00:25:14.063 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:14.063 "oacs": { 00:25:14.063 "security": 0, 00:25:14.063 "format": 1, 00:25:14.063 "firmware": 0, 00:25:14.063 "ns_manage": 1 00:25:14.063 }, 00:25:14.063 "multi_ctrlr": false, 00:25:14.063 "ana_reporting": false 00:25:14.063 }, 00:25:14.063 "vs": { 00:25:14.063 "nvme_version": "1.4" 00:25:14.063 }, 00:25:14.063 "ns_data": { 00:25:14.063 "id": 1, 00:25:14.063 "can_share": false 00:25:14.063 } 00:25:14.063 } 00:25:14.063 ], 00:25:14.063 "mp_policy": "active_passive" 00:25:14.063 } 00:25:14.063 } 00:25:14.063 ]' 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:14.063 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:14.320 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9faad464-9cba-4c1e-9ba8-50cb5f91c523 00:25:14.320 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:14.320 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9faad464-9cba-4c1e-9ba8-50cb5f91c523 00:25:14.578 15:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:14.836 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f2d4594f-c264-45c6-a2df-fe44fa33fa95 00:25:14.836 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f2d4594f-c264-45c6-a2df-fe44fa33fa95 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=61979937-ce5d-4ffc-aee0-8986af2581f5 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 61979937-ce5d-4ffc-aee0-8986af2581f5 ]] 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 61979937-ce5d-4ffc-aee0-8986af2581f5 5120 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=61979937-ce5d-4ffc-aee0-8986af2581f5 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 61979937-ce5d-4ffc-aee0-8986af2581f5 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=61979937-ce5d-4ffc-aee0-8986af2581f5 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:15.094 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61979937-ce5d-4ffc-aee0-8986af2581f5 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:15.353 { 00:25:15.353 "name": "61979937-ce5d-4ffc-aee0-8986af2581f5", 00:25:15.353 "aliases": [ 00:25:15.353 "lvs/basen1p0" 00:25:15.353 ], 00:25:15.353 "product_name": "Logical Volume", 00:25:15.353 "block_size": 4096, 00:25:15.353 "num_blocks": 5242880, 00:25:15.353 "uuid": "61979937-ce5d-4ffc-aee0-8986af2581f5", 00:25:15.353 "assigned_rate_limits": { 00:25:15.353 "rw_ios_per_sec": 0, 00:25:15.353 "rw_mbytes_per_sec": 0, 00:25:15.353 "r_mbytes_per_sec": 0, 00:25:15.353 "w_mbytes_per_sec": 0 00:25:15.353 }, 00:25:15.353 "claimed": false, 00:25:15.353 "zoned": false, 00:25:15.353 "supported_io_types": { 00:25:15.353 "read": true, 00:25:15.353 "write": true, 00:25:15.353 "unmap": true, 00:25:15.353 "flush": false, 00:25:15.353 "reset": true, 00:25:15.353 "nvme_admin": false, 00:25:15.353 "nvme_io": false, 00:25:15.353 "nvme_io_md": false, 00:25:15.353 "write_zeroes": 
true, 00:25:15.353 "zcopy": false, 00:25:15.353 "get_zone_info": false, 00:25:15.353 "zone_management": false, 00:25:15.353 "zone_append": false, 00:25:15.353 "compare": false, 00:25:15.353 "compare_and_write": false, 00:25:15.353 "abort": false, 00:25:15.353 "seek_hole": true, 00:25:15.353 "seek_data": true, 00:25:15.353 "copy": false, 00:25:15.353 "nvme_iov_md": false 00:25:15.353 }, 00:25:15.353 "driver_specific": { 00:25:15.353 "lvol": { 00:25:15.353 "lvol_store_uuid": "f2d4594f-c264-45c6-a2df-fe44fa33fa95", 00:25:15.353 "base_bdev": "basen1", 00:25:15.353 "thin_provision": true, 00:25:15.353 "num_allocated_clusters": 0, 00:25:15.353 "snapshot": false, 00:25:15.353 "clone": false, 00:25:15.353 "esnap_clone": false 00:25:15.353 } 00:25:15.353 } 00:25:15.353 } 00:25:15.353 ]' 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:15.353 15:25:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:15.612 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:15.612 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:15.612 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:15.871 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:15.871 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:15.871 15:25:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 61979937-ce5d-4ffc-aee0-8986af2581f5 -c cachen1p0 --l2p_dram_limit 2 00:25:16.131 [2024-10-01 15:25:14.473392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.473467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:16.131 [2024-10-01 15:25:14.473486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:16.131 [2024-10-01 15:25:14.473500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.473571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.473587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:16.131 [2024-10-01 15:25:14.473598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:25:16.131 [2024-10-01 15:25:14.473618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.473644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:16.131 [2024-10-01 
15:25:14.473973] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:16.131 [2024-10-01 15:25:14.474000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.474015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:16.131 [2024-10-01 15:25:14.474030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:25:16.131 [2024-10-01 15:25:14.474043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.474227] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 05c080d4-a80f-4345-899f-f3776a4bb0d9 00:25:16.131 [2024-10-01 15:25:14.475709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.475744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:16.131 [2024-10-01 15:25:14.475761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:16.131 [2024-10-01 15:25:14.475773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.483265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.483299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:16.131 [2024-10-01 15:25:14.483314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.448 ms 00:25:16.131 [2024-10-01 15:25:14.483336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.483390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.483404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:16.131 [2024-10-01 15:25:14.483425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:16.131 [2024-10-01 15:25:14.483440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.483518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.483537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:16.131 [2024-10-01 15:25:14.483552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:16.131 [2024-10-01 15:25:14.483587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.483621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:16.131 [2024-10-01 15:25:14.485477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.485512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:16.131 [2024-10-01 15:25:14.485528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.873 ms 00:25:16.131 [2024-10-01 15:25:14.485542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.485572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.485586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:16.131 [2024-10-01 15:25:14.485597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:16.131 [2024-10-01 15:25:14.485613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.485632] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:16.131 [2024-10-01 15:25:14.485765] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:16.131 [2024-10-01 15:25:14.485781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:16.131 [2024-10-01 15:25:14.485797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:16.131 [2024-10-01 15:25:14.485811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:16.131 [2024-10-01 15:25:14.485840] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:16.131 [2024-10-01 15:25:14.485852] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:16.131 [2024-10-01 15:25:14.485870] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:16.131 [2024-10-01 15:25:14.485880] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:16.131 [2024-10-01 15:25:14.485895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:16.131 [2024-10-01 15:25:14.485908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.485922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:16.131 [2024-10-01 15:25:14.485932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:25:16.131 [2024-10-01 15:25:14.485953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.486035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.131 [2024-10-01 15:25:14.486053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:16.131 [2024-10-01 15:25:14.486063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:25:16.131 [2024-10-01 15:25:14.486084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.131 [2024-10-01 15:25:14.486191] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:16.131 [2024-10-01 15:25:14.486210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:16.131 [2024-10-01 15:25:14.486222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:16.131 [2024-10-01 15:25:14.486235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.131 [2024-10-01 15:25:14.486246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:16.131 [2024-10-01 15:25:14.486259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:16.131 [2024-10-01 15:25:14.486269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:16.132 [2024-10-01 15:25:14.486283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:16.132 [2024-10-01 15:25:14.486293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:16.132 [2024-10-01 15:25:14.486306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:16.132 [2024-10-01 15:25:14.486328] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:16.132 [2024-10-01 15:25:14.486337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:16.132 [2024-10-01 15:25:14.486362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:16.132 [2024-10-01 15:25:14.486375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:16.132 [2024-10-01 15:25:14.486413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:16.132 [2024-10-01 15:25:14.486423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:16.132 [2024-10-01 15:25:14.486446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:16.132 [2024-10-01 15:25:14.486481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:16.132 [2024-10-01 15:25:14.486514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:16.132 [2024-10-01 15:25:14.486551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:16.132 [2024-10-01 15:25:14.486583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:16.132 [2024-10-01 15:25:14.486620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:16.132 [2024-10-01 15:25:14.486652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:16.132 [2024-10-01 15:25:14.486688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:16.132 [2024-10-01 15:25:14.486697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486710] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:16.132 [2024-10-01 15:25:14.486720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:16.132 [2024-10-01 15:25:14.486737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:16.132 [2024-10-01 15:25:14.486761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:16.132 [2024-10-01 15:25:14.486771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:16.132 [2024-10-01 15:25:14.486784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:16.132 [2024-10-01 15:25:14.486794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:16.132 [2024-10-01 15:25:14.486807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:16.132 [2024-10-01 15:25:14.486817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:16.132 [2024-10-01 15:25:14.486834] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:16.132 [2024-10-01 15:25:14.486847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:16.132 [2024-10-01 15:25:14.486873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:16.132 [2024-10-01 15:25:14.486913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:16.132 [2024-10-01 15:25:14.486926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:16.132 [2024-10-01 15:25:14.486942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:16.132 [2024-10-01 15:25:14.486954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.486991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.487002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.487016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.487027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:16.132 [2024-10-01 15:25:14.487040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:16.132 [2024-10-01 15:25:14.487055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.132 [2024-10-01 15:25:14.487070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.133 [2024-10-01 15:25:14.487082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:16.133 [2024-10-01 15:25:14.487096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:16.133 [2024-10-01 15:25:14.487107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:16.133 [2024-10-01 15:25:14.487122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:16.133 [2024-10-01 15:25:14.487133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:16.133 [2024-10-01 15:25:14.487151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:25:16.133 [2024-10-01 15:25:14.487161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:16.133 [2024-10-01 15:25:14.487230] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
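The superblock layout rows above give offsets and sizes in hex, counted in 4096-byte FTL blocks (the base bdev's block_size earlier in this run), while the region dump reports MiB; the two agree once converted. For instance the base-device data region (type:0x9, blk_sz:0x480000) matches data_btm's 18432.00 MiB, and each 0x800-block region matches the 8.00 MiB p2l figures:

# one block = 4096 bytes; 1 MiB = 1048576 bytes
printf '%d MiB\n' $(( 0x480000 * 4096 / 1048576 ))   # -> 18432 MiB (data_btm)
printf '%d MiB\n' $(( 0x800 * 4096 / 1048576 ))      # -> 8 MiB (each p2l region)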
00:25:16.133 [2024-10-01 15:25:14.487245] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:19.416 [2024-10-01 15:25:17.500320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.500394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:19.416 [2024-10-01 15:25:17.500421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3017.976 ms 00:25:19.416 [2024-10-01 15:25:17.500433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.511608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.511669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:19.416 [2024-10-01 15:25:17.511690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.071 ms 00:25:19.416 [2024-10-01 15:25:17.511702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.511764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.511776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:19.416 [2024-10-01 15:25:17.511795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:19.416 [2024-10-01 15:25:17.511808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.522874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.522924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:19.416 [2024-10-01 15:25:17.522956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.008 ms 00:25:19.416 [2024-10-01 15:25:17.522968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.523017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.523029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:19.416 [2024-10-01 15:25:17.523047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:19.416 [2024-10-01 15:25:17.523070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.523590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.523629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:19.416 [2024-10-01 15:25:17.523645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:25:19.416 [2024-10-01 15:25:17.523656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.523705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.523716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:19.416 [2024-10-01 15:25:17.523731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:19.416 [2024-10-01 15:25:17.523745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.551496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.551564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:19.416 [2024-10-01 15:25:17.551607] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.765 ms 00:25:19.416 [2024-10-01 15:25:17.551628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.561725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:19.416 [2024-10-01 15:25:17.562842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.562877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:19.416 [2024-10-01 15:25:17.562894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.078 ms 00:25:19.416 [2024-10-01 15:25:17.562910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.577149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.577212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:19.416 [2024-10-01 15:25:17.577228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.224 ms 00:25:19.416 [2024-10-01 15:25:17.577244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.577345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.577362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:19.416 [2024-10-01 15:25:17.577374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:19.416 [2024-10-01 15:25:17.577387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.416 [2024-10-01 15:25:17.580033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.416 [2024-10-01 15:25:17.580075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:19.416 [2024-10-01 15:25:17.580087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.628 ms 00:25:19.416 [2024-10-01 15:25:17.580101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.582866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.582905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:19.417 [2024-10-01 15:25:17.582917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.730 ms 00:25:19.417 [2024-10-01 15:25:17.582930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.583238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.583258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:19.417 [2024-10-01 15:25:17.583271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:25:19.417 [2024-10-01 15:25:17.583286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.614719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.614781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:19.417 [2024-10-01 15:25:17.614799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.440 ms 00:25:19.417 [2024-10-01 15:25:17.614814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.618854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:19.417 [2024-10-01 15:25:17.618904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:19.417 [2024-10-01 15:25:17.618919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.988 ms 00:25:19.417 [2024-10-01 15:25:17.618933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.622103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.622150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:19.417 [2024-10-01 15:25:17.622163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.138 ms 00:25:19.417 [2024-10-01 15:25:17.622190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.625387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.625430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:19.417 [2024-10-01 15:25:17.625443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.164 ms 00:25:19.417 [2024-10-01 15:25:17.625460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.625503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.625519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:19.417 [2024-10-01 15:25:17.625531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:19.417 [2024-10-01 15:25:17.625553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.625646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.417 [2024-10-01 15:25:17.625662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:19.417 [2024-10-01 15:25:17.625673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:25:19.417 [2024-10-01 15:25:17.625686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.417 [2024-10-01 15:25:17.626804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3158.084 ms, result 0 00:25:19.417 { 00:25:19.417 "name": "ftl", 00:25:19.417 "uuid": "05c080d4-a80f-4345-899f-f3776a4bb0d9" 00:25:19.417 } 00:25:19.417 15:25:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:19.417 [2024-10-01 15:25:17.869555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.417 15:25:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:19.674 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:19.987 [2024-10-01 15:25:18.341291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:19.987 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:20.244 [2024-10-01 15:25:18.569429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:20.244 15:25:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:20.502 Fill FTL, iteration 1 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91398 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91398 /var/tmp/spdk.tgt.sock 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91398 ']' 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:20.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.502 15:25:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:20.762 [2024-10-01 15:25:19.084288] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
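(Annotation, not test output: the upgrade_shutdown.sh parameters traced above fully determine the fill geometry. A quick sanity check using only values copied from the trace — shell arithmetic, nothing else assumed:

  bs=1048576; count=1024; iterations=2
  echo $(( bs * count ))               # 1073741824, the size= value: 1 GiB per pass
  echo $(( bs * count * iterations ))  # 2147483648: two passes, at seek=0 and seek=1024

seek/skip are counted in bs-sized blocks, so seek=1024 is a 1 GiB offset into ftln1.)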
00:25:20.762 [2024-10-01 15:25:19.084468] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91398 ] 00:25:20.762 [2024-10-01 15:25:19.250567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.762 [2024-10-01 15:25:19.298587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.695 15:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.695 15:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:21.695 15:25:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:21.695 ftln1 00:25:21.695 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:21.695 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91398 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91398 ']' 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91398 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91398 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:21.953 killing process with pid 91398 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91398' 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91398 00:25:21.953 15:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91398 00:25:22.518 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:22.518 15:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:22.518 [2024-10-01 15:25:20.974063] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
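(Annotation, not test output: condensing the RPC traffic traced above — the target side exports the FTL bdev over NVMe/TCP, then the initiator-side spdk_tgt listening on /var/tmp/spdk.tgt.sock attaches to it (which surfaces the namespace as ftln1) and snapshots its bdev config into the JSON file spdk_dd consumes. Every command below appears verbatim in the trace; only the RPC/RPC_INI shorthands are illustrative:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  RPC_INI="$RPC -s /var/tmp/spdk.tgt.sock"

  # target side: NVMe/TCP export of the FTL bdev
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

  # initiator side: attach, then save a bdev config for spdk_dd's --json option
  $RPC_INI bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
    echo '{"subsystems": ['
    $RPC_INI save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

The initiator process is then killed; each subsequent spdk_dd run reattaches from ini.json.)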
00:25:22.518 [2024-10-01 15:25:20.974216] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91424 ] 00:25:22.776 [2024-10-01 15:25:21.143130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.776 [2024-10-01 15:25:21.187956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.220  Copying: 247/1024 [MB] (247 MBps) Copying: 495/1024 [MB] (248 MBps) Copying: 744/1024 [MB] (249 MBps) Copying: 999/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 249 MBps) 00:25:27.220 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:27.220 Calculate MD5 checksum, iteration 1 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:27.220 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:27.221 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:27.221 15:25:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:27.481 [2024-10-01 15:25:25.844341] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
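(Annotation, not test output: the flattened "Copying: N/1024 [MB]" run above is spdk_dd's carriage-return progress line. The reported average is consistent with the surrounding timestamps; a trivial check using only numbers from the trace:

  # 1024 MB at an average 249 MBps ~= 4.1 s of copy time,
  # matching the jump from 15:25:21 to 15:25:25 in the log
  echo "scale=1; 1024 / 249" | bc   # 4.1
)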
00:25:27.481 [2024-10-01 15:25:25.844534] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91477 ] 00:25:27.740 [2024-10-01 15:25:26.038431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.740 [2024-10-01 15:25:26.087674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.635  Copying: 657/1024 [MB] (657 MBps) Copying: 1024/1024 [MB] (average 650 MBps) 00:25:29.635 00:25:29.635 15:25:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:29.635 15:25:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:31.538 Fill FTL, iteration 2 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d0a1005553f5719a71dc95c0e8e89b06 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:31.538 15:25:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:31.538 [2024-10-01 15:25:29.905868] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
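(Annotation, not test output: iteration 1's checksum is captured exactly as the @47/@48 trace lines show — read the just-written 1 GiB window back out of ftln1 into a regular file, then keep field 1 of md5sum's output. A condensed sketch; the file path and the sums/i variables are the script's own, seen in the trace:

  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
  # readback for iteration 1 ran with --ib=ftln1 --of=$FILE --skip=0
  sums[i]=$(md5sum "$FILE" | cut -d ' ' -f1)   # here: d0a1005553f5719a71dc95c0e8e89b06
)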
00:25:31.538 [2024-10-01 15:25:29.906015] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91527 ] 00:25:31.538 [2024-10-01 15:25:30.073587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.797 [2024-10-01 15:25:30.122657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.360  Copying: 237/1024 [MB] (237 MBps) Copying: 478/1024 [MB] (241 MBps) Copying: 712/1024 [MB] (234 MBps) Copying: 945/1024 [MB] (233 MBps) Copying: 1024/1024 [MB] (average 236 MBps) 00:25:36.360 00:25:36.620 Calculate MD5 checksum, iteration 2 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:36.620 15:25:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:36.620 [2024-10-01 15:25:35.016433] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
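(Annotation, not test output: putting the passes together, the traced script is effectively the loop below — each iteration fills 1024 MiB-blocks further into ftln1, reads the same window back, and records its checksum so the sums[] array can be re-verified after the upgrade/shutdown cycle. This is a paraphrase of the visible trace, not the script source; tcp_dd is the ftl/common.sh helper seen wrapping spdk_dd:

  iterations=2
  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
  seek=0; skip=0; i=0
  while (( i < iterations )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
    seek=$(( seek + 1024 ))
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    skip=$(( skip + 1024 ))
    sums[i]=$(md5sum "$FILE" | cut -d ' ' -f1)
    i=$(( i + 1 ))
  done
)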
00:25:36.620 [2024-10-01 15:25:35.016597] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91580 ] 00:25:36.879 [2024-10-01 15:25:35.182674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.879 [2024-10-01 15:25:35.231844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.395  Copying: 669/1024 [MB] (669 MBps) Copying: 1024/1024 [MB] (average 660 MBps) 00:25:39.395 00:25:39.395 15:25:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:39.395 15:25:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:41.299 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:41.299 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=81f51b8c55d7c70b6589b536555825a3 00:25:41.299 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:41.299 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:41.299 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:41.299 [2024-10-01 15:25:39.815794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.299 [2024-10-01 15:25:39.815855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:41.299 [2024-10-01 15:25:39.815873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:41.299 [2024-10-01 15:25:39.815884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.299 [2024-10-01 15:25:39.815921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.299 [2024-10-01 15:25:39.815941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:41.299 [2024-10-01 15:25:39.815956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:41.299 [2024-10-01 15:25:39.815966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.299 [2024-10-01 15:25:39.815988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.299 [2024-10-01 15:25:39.815998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:41.300 [2024-10-01 15:25:39.816009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:41.300 [2024-10-01 15:25:39.816019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.300 [2024-10-01 15:25:39.816084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.292 ms, result 0 00:25:41.300 true 00:25:41.558 15:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:41.558 { 00:25:41.558 "name": "ftl", 00:25:41.558 "properties": [ 00:25:41.558 { 00:25:41.558 "name": "superblock_version", 00:25:41.558 "value": 5, 00:25:41.558 "read-only": true 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "name": "base_device", 00:25:41.558 "bands": [ 00:25:41.558 { 00:25:41.558 "id": 0, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 1, 
00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 2, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 3, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 4, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 5, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 6, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 7, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 8, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 9, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 10, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 11, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 12, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 13, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 14, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 15, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 16, 00:25:41.558 "state": "FREE", 00:25:41.558 "validity": 0.0 00:25:41.558 }, 00:25:41.558 { 00:25:41.558 "id": 17, 00:25:41.558 "state": "FREE", 00:25:41.559 "validity": 0.0 00:25:41.559 } 00:25:41.559 ], 00:25:41.559 "read-only": true 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "name": "cache_device", 00:25:41.559 "type": "bdev", 00:25:41.559 "chunks": [ 00:25:41.559 { 00:25:41.559 "id": 0, 00:25:41.559 "state": "INACTIVE", 00:25:41.559 "utilization": 0.0 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "id": 1, 00:25:41.559 "state": "CLOSED", 00:25:41.559 "utilization": 1.0 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "id": 2, 00:25:41.559 "state": "CLOSED", 00:25:41.559 "utilization": 1.0 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "id": 3, 00:25:41.559 "state": "OPEN", 00:25:41.559 "utilization": 0.001953125 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "id": 4, 00:25:41.559 "state": "OPEN", 00:25:41.559 "utilization": 0.0 00:25:41.559 } 00:25:41.559 ], 00:25:41.559 "read-only": true 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "name": "verbose_mode", 00:25:41.559 "value": true, 00:25:41.559 "unit": "", 00:25:41.559 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:41.559 }, 00:25:41.559 { 00:25:41.559 "name": "prep_upgrade_on_shutdown", 00:25:41.559 "value": false, 00:25:41.559 "unit": "", 00:25:41.559 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:41.559 } 00:25:41.559 ] 00:25:41.559 } 00:25:41.559 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:41.818 [2024-10-01 15:25:40.255044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.818 [2024-10-01 15:25:40.255104] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:41.818 [2024-10-01 15:25:40.255121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:41.818 [2024-10-01 15:25:40.255132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.818 [2024-10-01 15:25:40.255160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.818 [2024-10-01 15:25:40.255184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:41.818 [2024-10-01 15:25:40.255195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:41.818 [2024-10-01 15:25:40.255205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.818 [2024-10-01 15:25:40.255227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.818 [2024-10-01 15:25:40.255238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:41.818 [2024-10-01 15:25:40.255248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:41.818 [2024-10-01 15:25:40.255259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.818 [2024-10-01 15:25:40.255320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.268 ms, result 0 00:25:41.818 true 00:25:41.818 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:41.818 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:41.818 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:42.076 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:42.076 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:42.076 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:42.333 [2024-10-01 15:25:40.762643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.333 [2024-10-01 15:25:40.762699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:42.333 [2024-10-01 15:25:40.762715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:42.333 [2024-10-01 15:25:40.762726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.333 [2024-10-01 15:25:40.762753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.333 [2024-10-01 15:25:40.762765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:42.333 [2024-10-01 15:25:40.762775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:42.333 [2024-10-01 15:25:40.762786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.333 [2024-10-01 15:25:40.762807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.333 [2024-10-01 15:25:40.762818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:42.333 [2024-10-01 15:25:40.762828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:42.333 [2024-10-01 15:25:40.762838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.333 [2024-10-01 15:25:40.762897] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.247 ms, result 0 00:25:42.333 true 00:25:42.333 15:25:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:42.610 { 00:25:42.610 "name": "ftl", 00:25:42.610 "properties": [ 00:25:42.610 { 00:25:42.610 "name": "superblock_version", 00:25:42.610 "value": 5, 00:25:42.610 "read-only": true 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "name": "base_device", 00:25:42.610 "bands": [ 00:25:42.610 { 00:25:42.610 "id": 0, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 1, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 2, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 3, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 4, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 5, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 6, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 7, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 8, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 9, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 10, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 11, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 12, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 13, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 14, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 15, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 16, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 17, 00:25:42.610 "state": "FREE", 00:25:42.610 "validity": 0.0 00:25:42.610 } 00:25:42.610 ], 00:25:42.610 "read-only": true 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "name": "cache_device", 00:25:42.610 "type": "bdev", 00:25:42.610 "chunks": [ 00:25:42.610 { 00:25:42.610 "id": 0, 00:25:42.610 "state": "INACTIVE", 00:25:42.610 "utilization": 0.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 1, 00:25:42.610 "state": "CLOSED", 00:25:42.610 "utilization": 1.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 2, 00:25:42.610 "state": "CLOSED", 00:25:42.610 "utilization": 1.0 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 3, 00:25:42.610 "state": "OPEN", 00:25:42.610 "utilization": 0.001953125 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "id": 4, 00:25:42.610 "state": "OPEN", 00:25:42.610 "utilization": 0.0 00:25:42.610 } 00:25:42.610 ], 00:25:42.610 "read-only": true 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "name": "verbose_mode", 00:25:42.610 "value": true, 00:25:42.610 "unit": "", 00:25:42.610 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:42.610 }, 00:25:42.610 { 00:25:42.610 "name": "prep_upgrade_on_shutdown", 00:25:42.610 "value": true, 00:25:42.610 "unit": "", 00:25:42.610 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:42.610 } 00:25:42.610 ] 00:25:42.610 } 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91276 ]] 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91276 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91276 ']' 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91276 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91276 00:25:42.610 killing process with pid 91276 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91276' 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91276 00:25:42.610 15:25:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91276 00:25:42.869 [2024-10-01 15:25:41.177446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:42.869 [2024-10-01 15:25:41.182630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.869 [2024-10-01 15:25:41.182676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:42.869 [2024-10-01 15:25:41.182692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:42.869 [2024-10-01 15:25:41.182704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:42.869 [2024-10-01 15:25:41.182729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:42.869 [2024-10-01 15:25:41.183388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:42.869 [2024-10-01 15:25:41.183412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:42.869 [2024-10-01 15:25:41.183423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:25:42.869 [2024-10-01 15:25:41.183442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.267075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.267159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:50.985 [2024-10-01 15:25:48.267187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7095.104 ms 00:25:50.985 [2024-10-01 15:25:48.267204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.268262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.268312] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:50.985 [2024-10-01 15:25:48.268325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:25:50.985 [2024-10-01 15:25:48.268336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.269307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.269343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:50.985 [2024-10-01 15:25:48.269356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:25:50.985 [2024-10-01 15:25:48.269371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.271136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.271184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:50.985 [2024-10-01 15:25:48.271197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.699 ms 00:25:50.985 [2024-10-01 15:25:48.271207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.273520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.273563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:50.985 [2024-10-01 15:25:48.273576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.285 ms 00:25:50.985 [2024-10-01 15:25:48.273587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.273670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.273684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:50.985 [2024-10-01 15:25:48.273701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:50.985 [2024-10-01 15:25:48.273711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.275033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.275070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:50.985 [2024-10-01 15:25:48.275081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.307 ms 00:25:50.985 [2024-10-01 15:25:48.275091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.276543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.276579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:50.985 [2024-10-01 15:25:48.276591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.409 ms 00:25:50.985 [2024-10-01 15:25:48.276601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.277898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 [2024-10-01 15:25:48.277933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:50.985 [2024-10-01 15:25:48.277945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:25:50.985 [2024-10-01 15:25:48.277954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.279028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.985 
[2024-10-01 15:25:48.279063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:50.985 [2024-10-01 15:25:48.279074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.017 ms 00:25:50.985 [2024-10-01 15:25:48.279084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.985 [2024-10-01 15:25:48.279109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:50.985 [2024-10-01 15:25:48.279127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:50.985 [2024-10-01 15:25:48.279139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:50.985 [2024-10-01 15:25:48.279151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:50.985 [2024-10-01 15:25:48.279162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.985 [2024-10-01 15:25:48.279188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.985 [2024-10-01 15:25:48.279200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.986 [2024-10-01 15:25:48.279339] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:50.986 [2024-10-01 15:25:48.279348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 05c080d4-a80f-4345-899f-f3776a4bb0d9 00:25:50.986 [2024-10-01 15:25:48.279360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:50.986 [2024-10-01 15:25:48.279379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:50.986 [2024-10-01 15:25:48.279389] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:50.986 [2024-10-01 15:25:48.279400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:50.986 [2024-10-01 15:25:48.279410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:50.986 [2024-10-01 15:25:48.279426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:50.986 [2024-10-01 15:25:48.279437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:50.986 [2024-10-01 15:25:48.279446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:50.986 [2024-10-01 15:25:48.279455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:50.986 [2024-10-01 15:25:48.279464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.986 [2024-10-01 15:25:48.279474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:50.986 [2024-10-01 15:25:48.279485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:25:50.986 [2024-10-01 15:25:48.279495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.281323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.986 [2024-10-01 15:25:48.281349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:50.986 [2024-10-01 15:25:48.281361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.812 ms 00:25:50.986 [2024-10-01 15:25:48.281377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.281504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.986 [2024-10-01 15:25:48.281516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:50.986 [2024-10-01 15:25:48.281526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:25:50.986 [2024-10-01 15:25:48.281537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.288449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.288483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:50.986 [2024-10-01 15:25:48.288501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.288511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.288544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.288555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:50.986 [2024-10-01 15:25:48.288566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.288577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.288641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.288656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:50.986 [2024-10-01 15:25:48.288666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.288682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.288707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 
15:25:48.288719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:50.986 [2024-10-01 15:25:48.288738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.288748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.302253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.302307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:50.986 [2024-10-01 15:25:48.302321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.302339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.310889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.310936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:50.986 [2024-10-01 15:25:48.310950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.310961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:50.986 [2024-10-01 15:25:48.311066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:50.986 [2024-10-01 15:25:48.311149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:50.986 [2024-10-01 15:25:48.311269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:50.986 [2024-10-01 15:25:48.311346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:50.986 [2024-10-01 15:25:48.311419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:25:50.986 [2024-10-01 15:25:48.311501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:50.986 [2024-10-01 15:25:48.311511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.986 [2024-10-01 15:25:48.311522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.986 [2024-10-01 15:25:48.311659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7140.577 ms, result 0 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91747 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91747 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91747 ']' 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.554 15:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:51.554 [2024-10-01 15:25:50.066510] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
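(Annotation, not test output: after prep_upgrade_on_shutdown was flipped to true, the old target (pid 91276) was killed and FTL ran its 'FTL shutdown' management process, persisting the L2P, metadata, and clean-state marker before rolling back its runtime objects. The records above show the replacement spdk_tgt (pid 91747) coming up from the saved config; condensed from the trace, with killprocess/waitforlisten being the autotest helpers seen in it:

  # tear down the old target; FTL persists its state during shutdown
  killprocess 91276

  # bring up a fresh target from the config saved earlier; on load, FTL
  # reads back its v5 superblock and the bands/chunks written pre-restart
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"

The "SHM: clean 0, shm_clean 0" and "Load super block" records that follow confirm the reload path rather than a fresh format.)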
00:25:51.554 [2024-10-01 15:25:50.066873] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91747 ] 00:25:51.831 [2024-10-01 15:25:50.235641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.831 [2024-10-01 15:25:50.284765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.088 [2024-10-01 15:25:50.595734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:52.088 [2024-10-01 15:25:50.595814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:52.347 [2024-10-01 15:25:50.739846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.347 [2024-10-01 15:25:50.739902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:52.347 [2024-10-01 15:25:50.739922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:52.347 [2024-10-01 15:25:50.739933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.347 [2024-10-01 15:25:50.740004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.347 [2024-10-01 15:25:50.740019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:52.347 [2024-10-01 15:25:50.740030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:25:52.347 [2024-10-01 15:25:50.740047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.347 [2024-10-01 15:25:50.740081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:52.347 [2024-10-01 15:25:50.740376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:52.347 [2024-10-01 15:25:50.740397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.347 [2024-10-01 15:25:50.740407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:52.347 [2024-10-01 15:25:50.740422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:25:52.347 [2024-10-01 15:25:50.740432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.347 [2024-10-01 15:25:50.741904] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:52.347 [2024-10-01 15:25:50.744544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.347 [2024-10-01 15:25:50.744584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:52.347 [2024-10-01 15:25:50.744598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.646 ms 00:25:52.347 [2024-10-01 15:25:50.744614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.347 [2024-10-01 15:25:50.744674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.347 [2024-10-01 15:25:50.744688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:52.347 [2024-10-01 15:25:50.744699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:52.347 [2024-10-01 15:25:50.744709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.751370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 
15:25:50.751404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:52.348 [2024-10-01 15:25:50.751421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.614 ms 00:25:52.348 [2024-10-01 15:25:50.751431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.751486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.751505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:52.348 [2024-10-01 15:25:50.751516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:25:52.348 [2024-10-01 15:25:50.751527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.751600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.751613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:52.348 [2024-10-01 15:25:50.751624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:52.348 [2024-10-01 15:25:50.751639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.751667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:52.348 [2024-10-01 15:25:50.753321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.753351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:52.348 [2024-10-01 15:25:50.753372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:25:52.348 [2024-10-01 15:25:50.753382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.753412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.753423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:52.348 [2024-10-01 15:25:50.753434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:52.348 [2024-10-01 15:25:50.753454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.753480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:52.348 [2024-10-01 15:25:50.753502] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:52.348 [2024-10-01 15:25:50.753537] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:52.348 [2024-10-01 15:25:50.753555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:25:52.348 [2024-10-01 15:25:50.753644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:52.348 [2024-10-01 15:25:50.753658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:52.348 [2024-10-01 15:25:50.753675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:52.348 [2024-10-01 15:25:50.753691] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:52.348 [2024-10-01 15:25:50.753703] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:25:52.348 [2024-10-01 15:25:50.753715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:52.348 [2024-10-01 15:25:50.753725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:52.348 [2024-10-01 15:25:50.753735] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:52.348 [2024-10-01 15:25:50.753745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:52.348 [2024-10-01 15:25:50.753756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.753765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:52.348 [2024-10-01 15:25:50.753775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:25:52.348 [2024-10-01 15:25:50.753785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.753868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.348 [2024-10-01 15:25:50.753879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:52.348 [2024-10-01 15:25:50.753889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:25:52.348 [2024-10-01 15:25:50.753899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.348 [2024-10-01 15:25:50.753993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:52.348 [2024-10-01 15:25:50.754008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:52.348 [2024-10-01 15:25:50.754020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:52.348 [2024-10-01 15:25:50.754056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:52.348 [2024-10-01 15:25:50.754076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:52.348 [2024-10-01 15:25:50.754085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:52.348 [2024-10-01 15:25:50.754094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:52.348 [2024-10-01 15:25:50.754114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:52.348 [2024-10-01 15:25:50.754123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:52.348 [2024-10-01 15:25:50.754151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:52.348 [2024-10-01 15:25:50.754161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:52.348 [2024-10-01 15:25:50.754198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:52.348 [2024-10-01 15:25:50.754208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754217] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:52.348 [2024-10-01 15:25:50.754227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:52.348 [2024-10-01 15:25:50.754254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:52.348 [2024-10-01 15:25:50.754282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:52.348 [2024-10-01 15:25:50.754309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:52.348 [2024-10-01 15:25:50.754337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:52.348 [2024-10-01 15:25:50.754369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:52.348 [2024-10-01 15:25:50.754396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:52.348 [2024-10-01 15:25:50.754423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:52.348 [2024-10-01 15:25:50.754432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754442] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:52.348 [2024-10-01 15:25:50.754455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:52.348 [2024-10-01 15:25:50.754465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:52.348 [2024-10-01 15:25:50.754485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:52.348 [2024-10-01 15:25:50.754495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:52.348 [2024-10-01 15:25:50.754507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:52.348 [2024-10-01 15:25:50.754516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:52.348 [2024-10-01 15:25:50.754525] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:52.348 [2024-10-01 15:25:50.754535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:52.348 [2024-10-01 15:25:50.754546] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:52.348 [2024-10-01 15:25:50.754558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.348 [2024-10-01 15:25:50.754570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:52.348 [2024-10-01 15:25:50.754581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:52.348 [2024-10-01 15:25:50.754591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:52.348 [2024-10-01 15:25:50.754602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:52.348 [2024-10-01 15:25:50.754612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:52.348 [2024-10-01 15:25:50.754623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:52.349 [2024-10-01 15:25:50.754634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:52.349 [2024-10-01 15:25:50.754644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:52.349 [2024-10-01 15:25:50.754719] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:52.349 [2024-10-01 15:25:50.754737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.349 [2024-10-01 15:25:50.754758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:52.349 [2024-10-01 15:25:50.754768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:52.349 [2024-10-01 15:25:50.754778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:52.349 [2024-10-01 15:25:50.754792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:52.349 [2024-10-01 15:25:50.754802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:52.349 [2024-10-01 15:25:50.754813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:25:52.349 [2024-10-01 15:25:50.754823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:52.349 [2024-10-01 15:25:50.754888] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:52.349 [2024-10-01 15:25:50.754903] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:55.634 [2024-10-01 15:25:54.024253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.634 [2024-10-01 15:25:54.024323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:55.634 [2024-10-01 15:25:54.024340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3274.673 ms 00:25:55.634 [2024-10-01 15:25:54.024352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.634 [2024-10-01 15:25:54.035209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.634 [2024-10-01 15:25:54.035265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:55.634 [2024-10-01 15:25:54.035282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.765 ms 00:25:55.634 [2024-10-01 15:25:54.035293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.035371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.035384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:55.635 [2024-10-01 15:25:54.035396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:55.635 [2024-10-01 15:25:54.035418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.053649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.053699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:55.635 [2024-10-01 15:25:54.053714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.178 ms 00:25:55.635 [2024-10-01 15:25:54.053734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.053785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.053796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:55.635 [2024-10-01 15:25:54.053807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:55.635 [2024-10-01 15:25:54.053817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.054309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.054331] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:55.635 [2024-10-01 15:25:54.054343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:25:55.635 [2024-10-01 15:25:54.054354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.054405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.054419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:55.635 [2024-10-01 15:25:54.054430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:55.635 [2024-10-01 15:25:54.054440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.061816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.061867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:55.635 [2024-10-01 15:25:54.061885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.363 ms 00:25:55.635 [2024-10-01 15:25:54.061909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.064839] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:55.635 [2024-10-01 15:25:54.064891] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:55.635 [2024-10-01 15:25:54.064912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.064925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:55.635 [2024-10-01 15:25:54.064940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.894 ms 00:25:55.635 [2024-10-01 15:25:54.064953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.069326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.069365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:55.635 [2024-10-01 15:25:54.069386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.321 ms 00:25:55.635 [2024-10-01 15:25:54.069396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.070758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.070794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:55.635 [2024-10-01 15:25:54.070807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.300 ms 00:25:55.635 [2024-10-01 15:25:54.070817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.072160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.072209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:55.635 [2024-10-01 15:25:54.072222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.308 ms 00:25:55.635 [2024-10-01 15:25:54.072232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.072521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.072538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:55.635 [2024-10-01 
15:25:54.072549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:25:55.635 [2024-10-01 15:25:54.072560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.092240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.092309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:55.635 [2024-10-01 15:25:54.092334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.689 ms 00:25:55.635 [2024-10-01 15:25:54.092345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.098577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:55.635 [2024-10-01 15:25:54.099342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.099371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:55.635 [2024-10-01 15:25:54.099383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.956 ms 00:25:55.635 [2024-10-01 15:25:54.099399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.099483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.099496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:55.635 [2024-10-01 15:25:54.099507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:55.635 [2024-10-01 15:25:54.099518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.099578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.099590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:55.635 [2024-10-01 15:25:54.099602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:55.635 [2024-10-01 15:25:54.099612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.099641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.099651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:55.635 [2024-10-01 15:25:54.099662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:55.635 [2024-10-01 15:25:54.099672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.099708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:55.635 [2024-10-01 15:25:54.099720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.099731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:55.635 [2024-10-01 15:25:54.099741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:55.635 [2024-10-01 15:25:54.099760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.102956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.103001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:55.635 [2024-10-01 15:25:54.103015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.179 ms 00:25:55.635 [2024-10-01 15:25:54.103026] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.103111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.635 [2024-10-01 15:25:54.103126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:55.635 [2024-10-01 15:25:54.103136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:25:55.635 [2024-10-01 15:25:54.103147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.635 [2024-10-01 15:25:54.104292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3369.481 ms, result 0 00:25:55.635 [2024-10-01 15:25:54.119719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.635 [2024-10-01 15:25:54.135694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:55.635 [2024-10-01 15:25:54.143814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:55.894 15:25:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.894 15:25:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:55.894 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:55.894 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:55.895 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:56.154 [2024-10-01 15:25:54.527351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.154 [2024-10-01 15:25:54.527402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:56.154 [2024-10-01 15:25:54.527418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:56.154 [2024-10-01 15:25:54.527431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.154 [2024-10-01 15:25:54.527456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.154 [2024-10-01 15:25:54.527468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:56.154 [2024-10-01 15:25:54.527488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:56.154 [2024-10-01 15:25:54.527498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.154 [2024-10-01 15:25:54.527525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.154 [2024-10-01 15:25:54.527537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:56.154 [2024-10-01 15:25:54.527547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:56.154 [2024-10-01 15:25:54.527557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.154 [2024-10-01 15:25:54.527632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.273 ms, result 0 00:25:56.154 true 00:25:56.154 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.413 { 00:25:56.413 "name": "ftl", 00:25:56.413 "properties": [ 00:25:56.413 { 00:25:56.413 "name": "superblock_version", 00:25:56.413 "value": 5, 00:25:56.413 "read-only": true 00:25:56.413 }, 00:25:56.413 { 
00:25:56.413 "name": "base_device", 00:25:56.413 "bands": [ 00:25:56.413 { 00:25:56.413 "id": 0, 00:25:56.413 "state": "CLOSED", 00:25:56.413 "validity": 1.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 1, 00:25:56.413 "state": "CLOSED", 00:25:56.413 "validity": 1.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 2, 00:25:56.413 "state": "CLOSED", 00:25:56.413 "validity": 0.007843137254901933 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 3, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 4, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 5, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 6, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 7, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 8, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 9, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 10, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 11, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 12, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 13, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 14, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 15, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 16, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 17, 00:25:56.413 "state": "FREE", 00:25:56.413 "validity": 0.0 00:25:56.413 } 00:25:56.413 ], 00:25:56.413 "read-only": true 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "name": "cache_device", 00:25:56.413 "type": "bdev", 00:25:56.413 "chunks": [ 00:25:56.413 { 00:25:56.413 "id": 0, 00:25:56.413 "state": "INACTIVE", 00:25:56.413 "utilization": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 1, 00:25:56.413 "state": "OPEN", 00:25:56.413 "utilization": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 2, 00:25:56.413 "state": "OPEN", 00:25:56.413 "utilization": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 3, 00:25:56.413 "state": "FREE", 00:25:56.413 "utilization": 0.0 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "id": 4, 00:25:56.413 "state": "FREE", 00:25:56.413 "utilization": 0.0 00:25:56.413 } 00:25:56.413 ], 00:25:56.413 "read-only": true 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "name": "verbose_mode", 00:25:56.413 "value": true, 00:25:56.413 "unit": "", 00:25:56.413 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:56.413 }, 00:25:56.413 { 00:25:56.413 "name": "prep_upgrade_on_shutdown", 00:25:56.413 "value": false, 00:25:56.413 "unit": "", 00:25:56.413 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:56.413 } 00:25:56.413 ] 00:25:56.413 } 00:25:56.413 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:56.413 15:25:54 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.413 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:56.672 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:56.672 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:56.672 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:56.672 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:56.672 15:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.673 Validate MD5 checksum, iteration 1 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:56.673 15:25:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:56.932 [2024-10-01 15:25:55.278248] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
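(Before any data is checksummed, the script asserts via bdev_ftl_get_properties that nothing is in flight: zero cache chunks with non-zero utilization and zero bands in the OPENED state, which is exactly what the jq filters traced above compute. A condensed sketch of the first assertion, assuming the property JSON shape dumped earlier in this log; names and paths are taken from the traces:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  props=$($rpc bdev_ftl_get_properties -b ftl)
  used=$(jq '[.properties[] | select(.name == "cache_device")
              | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # a non-empty write buffer would make the md5 runs racy
  [[ $used -eq 0 ]] || exit 1

With both counts at 0, iteration 1 of the checksum pass can safely read the device.)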
00:25:56.932 [2024-10-01 15:25:55.278413] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91816 ] 00:25:56.932 [2024-10-01 15:25:55.453357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.191 [2024-10-01 15:25:55.531035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.703  Copying: 691/1024 [MB] (691 MBps) Copying: 1024/1024 [MB] (average 664 MBps) 00:25:59.703 00:25:59.703 15:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:59.703 15:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:01.606 Validate MD5 checksum, iteration 2 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d0a1005553f5719a71dc95c0e8e89b06 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d0a1005553f5719a71dc95c0e8e89b06 != \d\0\a\1\0\0\5\5\5\3\f\5\7\1\9\a\7\1\d\c\9\5\c\0\e\8\e\8\9\b\0\6 ]] 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:01.606 15:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:01.606 [2024-10-01 15:25:59.958739] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
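(Each checksum iteration reads a 1 GiB slice of ftln1 over NVMe/TCP with spdk_dd, advancing --skip by 1024 one-MiB blocks, hashes the copy, and compares it against the hash recorded when the pattern was written; the backslash-escaped right-hand side in the [[ ]] trace above is only xtrace quoting, not part of the value. One pass, schematically — tcp_dd stands for the spdk_dd invocation traced above, and iterations/sums[] are hypothetical stand-ins for state the script sets up earlier:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  skip=0
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))                    # next 1 GiB window
      sum=$(md5sum "$file" | cut -f1 -d' ')
      [[ $sum == "${sums[i]}" ]] || exit 1     # must match the write-time hash
  done

Iteration 2 below is the same loop body with skip=1024.)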
00:26:01.606 [2024-10-01 15:25:59.958892] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91866 ] 00:26:01.606 [2024-10-01 15:26:00.113954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.865 [2024-10-01 15:26:00.183574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.794  Copying: 706/1024 [MB] (706 MBps) Copying: 1024/1024 [MB] (average 706 MBps) 00:26:06.794 00:26:06.794 15:26:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:06.794 15:26:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=81f51b8c55d7c70b6589b536555825a3 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 81f51b8c55d7c70b6589b536555825a3 != \8\1\f\5\1\b\8\c\5\5\d\7\c\7\0\b\6\5\8\9\b\5\3\6\5\5\5\8\2\5\a\3 ]] 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 91747 ]] 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 91747 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91947 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91947 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91947 ']' 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
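(tcp_target_shutdown_dirty is the point of the test: SIGKILL gives FTL no chance to persist its clean-shutdown metadata, so the next target (pid 91947) must come up from a dirty superblock and recover. Roughly, per the kill -9 and relaunch traced above:

  kill -9 "$spdk_tgt_pid"          # no SIGTERM: FTL must not see a clean shutdown
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # the replacement target finds the dirty state and replays NV-cache metadata

The "Killed" shell notice and the dirty-startup management trace that follow are the expected evidence of this path.)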
00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.176 15:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.436 [2024-10-01 15:26:06.790229] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:26:08.436 [2024-10-01 15:26:06.790386] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91947 ] 00:26:08.436 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 91747 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:08.436 [2024-10-01 15:26:06.959333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.696 [2024-10-01 15:26:07.009291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.966 [2024-10-01 15:26:07.320735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:08.966 [2024-10-01 15:26:07.320809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:08.966 [2024-10-01 15:26:07.465135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.465208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:08.966 [2024-10-01 15:26:07.465228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:08.966 [2024-10-01 15:26:07.465239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.465313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.465327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:08.966 [2024-10-01 15:26:07.465338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:08.966 [2024-10-01 15:26:07.465348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.465388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:08.966 [2024-10-01 15:26:07.465667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:08.966 [2024-10-01 15:26:07.465686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.465696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:08.966 [2024-10-01 15:26:07.465710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:26:08.966 [2024-10-01 15:26:07.465720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.466183] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:08.966 [2024-10-01 15:26:07.470409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.470445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:08.966 [2024-10-01 15:26:07.470460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.247 ms 00:26:08.966 [2024-10-01 15:26:07.470476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.471610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:08.966 [2024-10-01 15:26:07.471638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:08.966 [2024-10-01 15:26:07.471650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:08.966 [2024-10-01 15:26:07.471661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.472081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.472095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:08.966 [2024-10-01 15:26:07.472109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:26:08.966 [2024-10-01 15:26:07.472119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.472161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.472192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:08.966 [2024-10-01 15:26:07.472211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:08.966 [2024-10-01 15:26:07.472221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.472269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.472282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:08.966 [2024-10-01 15:26:07.472292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:08.966 [2024-10-01 15:26:07.472308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.472339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:08.966 [2024-10-01 15:26:07.473221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.473240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:08.966 [2024-10-01 15:26:07.473252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:26:08.966 [2024-10-01 15:26:07.473269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.473296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.473307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:08.966 [2024-10-01 15:26:07.473317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:08.966 [2024-10-01 15:26:07.473331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.473373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:08.966 [2024-10-01 15:26:07.473396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:08.966 [2024-10-01 15:26:07.473430] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:08.966 [2024-10-01 15:26:07.473449] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:08.966 [2024-10-01 15:26:07.473538] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:08.966 [2024-10-01 15:26:07.473555] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:08.966 [2024-10-01 15:26:07.473571] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:08.966 [2024-10-01 15:26:07.473584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:08.966 [2024-10-01 15:26:07.473596] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:08.966 [2024-10-01 15:26:07.473608] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:08.966 [2024-10-01 15:26:07.473618] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:08.966 [2024-10-01 15:26:07.473629] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:08.966 [2024-10-01 15:26:07.473640] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:08.966 [2024-10-01 15:26:07.473651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.473661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:08.966 [2024-10-01 15:26:07.473672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:26:08.966 [2024-10-01 15:26:07.473682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.473765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.966 [2024-10-01 15:26:07.473776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:08.966 [2024-10-01 15:26:07.473786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:08.966 [2024-10-01 15:26:07.473799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.966 [2024-10-01 15:26:07.473891] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:08.966 [2024-10-01 15:26:07.473904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:08.966 [2024-10-01 15:26:07.473914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:08.966 [2024-10-01 15:26:07.473925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.966 [2024-10-01 15:26:07.473935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:08.966 [2024-10-01 15:26:07.473945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:08.966 [2024-10-01 15:26:07.473954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:08.966 [2024-10-01 15:26:07.473964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:08.966 [2024-10-01 15:26:07.473974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:08.966 [2024-10-01 15:26:07.473983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.966 [2024-10-01 15:26:07.473993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:08.966 [2024-10-01 15:26:07.474003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:08.966 [2024-10-01 15:26:07.474022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.966 [2024-10-01 15:26:07.474032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:08.967 [2024-10-01 15:26:07.474041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:08.967 [2024-10-01 15:26:07.474057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:08.967 [2024-10-01 15:26:07.474076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:08.967 [2024-10-01 15:26:07.474085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:08.967 [2024-10-01 15:26:07.474104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:08.967 [2024-10-01 15:26:07.474133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:08.967 [2024-10-01 15:26:07.474161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:08.967 [2024-10-01 15:26:07.474203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:08.967 [2024-10-01 15:26:07.474234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:08.967 [2024-10-01 15:26:07.474262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:08.967 [2024-10-01 15:26:07.474290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:08.967 [2024-10-01 15:26:07.474319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:08.967 [2024-10-01 15:26:07.474329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474338] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:08.967 [2024-10-01 15:26:07.474349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:08.967 [2024-10-01 15:26:07.474364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:08.967 [2024-10-01 15:26:07.474386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:08.967 [2024-10-01 15:26:07.474397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:08.967 [2024-10-01 15:26:07.474406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:08.967 [2024-10-01 15:26:07.474416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:08.967 [2024-10-01 15:26:07.474425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:08.967 [2024-10-01 15:26:07.474435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:08.967 [2024-10-01 15:26:07.474446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:08.967 [2024-10-01 15:26:07.474465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:08.967 [2024-10-01 15:26:07.474488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:08.967 [2024-10-01 15:26:07.474519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:08.967 [2024-10-01 15:26:07.474530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:08.967 [2024-10-01 15:26:07.474542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:08.967 [2024-10-01 15:26:07.474553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:08.967 [2024-10-01 15:26:07.474629] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:08.967 [2024-10-01 15:26:07.474641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:08.967 [2024-10-01 15:26:07.474662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:08.967 [2024-10-01 15:26:07.474673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:08.967 [2024-10-01 15:26:07.474684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:08.967 [2024-10-01 15:26:07.474695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.967 [2024-10-01 15:26:07.474705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:08.967 [2024-10-01 15:26:07.474716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:26:08.967 [2024-10-01 15:26:07.474725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.967 [2024-10-01 15:26:07.484341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.967 [2024-10-01 15:26:07.484376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:08.967 [2024-10-01 15:26:07.484389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.566 ms 00:26:08.967 [2024-10-01 15:26:07.484400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.967 [2024-10-01 15:26:07.484453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.967 [2024-10-01 15:26:07.484469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:08.967 [2024-10-01 15:26:07.484480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:08.967 [2024-10-01 15:26:07.484491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.505668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.505717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:09.274 [2024-10-01 15:26:07.505735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.141 ms 00:26:09.274 [2024-10-01 15:26:07.505749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.505813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.505839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:09.274 [2024-10-01 15:26:07.505854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:09.274 [2024-10-01 15:26:07.505868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.506028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.506054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:09.274 [2024-10-01 15:26:07.506068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:26:09.274 [2024-10-01 15:26:07.506086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.506138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.506154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:09.274 [2024-10-01 15:26:07.506193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:09.274 [2024-10-01 15:26:07.506208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.514068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.514114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:09.274 [2024-10-01 15:26:07.514131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.843 ms 00:26:09.274 [2024-10-01 15:26:07.514145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.514296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.514316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:09.274 [2024-10-01 15:26:07.514330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:09.274 [2024-10-01 15:26:07.514353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.518801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.518845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:09.274 [2024-10-01 15:26:07.518859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.428 ms 00:26:09.274 [2024-10-01 15:26:07.518871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.520227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.520268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:09.274 [2024-10-01 15:26:07.520282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:26:09.274 [2024-10-01 15:26:07.520293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.539972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.540038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:09.274 [2024-10-01 15:26:07.540056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.654 ms 00:26:09.274 [2024-10-01 15:26:07.540073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.540248] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:09.274 [2024-10-01 15:26:07.540355] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:09.274 [2024-10-01 15:26:07.540453] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:09.274 [2024-10-01 15:26:07.540542] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:09.274 [2024-10-01 15:26:07.540555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.540566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:09.274 [2024-10-01 
15:26:07.540577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:26:09.274 [2024-10-01 15:26:07.540588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.540638] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:09.274 [2024-10-01 15:26:07.540657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.540676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:09.274 [2024-10-01 15:26:07.540687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:09.274 [2024-10-01 15:26:07.540705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.274 [2024-10-01 15:26:07.543537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.274 [2024-10-01 15:26:07.543586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:09.275 [2024-10-01 15:26:07.543599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.786 ms 00:26:09.275 [2024-10-01 15:26:07.543609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.275 [2024-10-01 15:26:07.544194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.275 [2024-10-01 15:26:07.544217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:09.275 [2024-10-01 15:26:07.544230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:09.275 [2024-10-01 15:26:07.544240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.275 [2024-10-01 15:26:07.544312] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:09.275 [2024-10-01 15:26:07.544530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.275 [2024-10-01 15:26:07.544544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:09.275 [2024-10-01 15:26:07.544555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:26:09.275 [2024-10-01 15:26:07.544565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.844 [2024-10-01 15:26:08.093325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.844 [2024-10-01 15:26:08.093399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:09.844 [2024-10-01 15:26:08.093435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 549.346 ms 00:26:09.844 [2024-10-01 15:26:08.093447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.844 [2024-10-01 15:26:08.094849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.844 [2024-10-01 15:26:08.094885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:09.844 [2024-10-01 15:26:08.094899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:26:09.844 [2024-10-01 15:26:08.094910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.844 [2024-10-01 15:26:08.095380] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:09.844 [2024-10-01 15:26:08.095407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.844 [2024-10-01 15:26:08.095428] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:09.844 [2024-10-01 15:26:08.095440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:26:09.844 [2024-10-01 15:26:08.095450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.844 [2024-10-01 15:26:08.095582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.844 [2024-10-01 15:26:08.095595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:09.844 [2024-10-01 15:26:08.095606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:09.844 [2024-10-01 15:26:08.095616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.844 [2024-10-01 15:26:08.095666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 552.244 ms, result 0 00:26:09.844 [2024-10-01 15:26:08.095711] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:09.844 [2024-10-01 15:26:08.095774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.844 [2024-10-01 15:26:08.095785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:09.844 [2024-10-01 15:26:08.095796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:26:09.844 [2024-10-01 15:26:08.095806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.619865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.619936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:10.105 [2024-10-01 15:26:08.619952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 524.563 ms 00:26:10.105 [2024-10-01 15:26:08.619964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.621398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.621434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:10.105 [2024-10-01 15:26:08.621447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.006 ms 00:26:10.105 [2024-10-01 15:26:08.621458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.621902] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:10.105 [2024-10-01 15:26:08.621927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.621938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:10.105 [2024-10-01 15:26:08.621949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.440 ms 00:26:10.105 [2024-10-01 15:26:08.621959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.621989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.622000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:10.105 [2024-10-01 15:26:08.622011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:10.105 [2024-10-01 15:26:08.622021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 
15:26:08.622058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 527.202 ms, result 0 00:26:10.105 [2024-10-01 15:26:08.622101] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:10.105 [2024-10-01 15:26:08.622114] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:10.105 [2024-10-01 15:26:08.622128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.622139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:10.105 [2024-10-01 15:26:08.622149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1079.588 ms 00:26:10.105 [2024-10-01 15:26:08.622181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.622213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.622224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:10.105 [2024-10-01 15:26:08.622240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:10.105 [2024-10-01 15:26:08.622258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.629127] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:10.105 [2024-10-01 15:26:08.629262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.629275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:10.105 [2024-10-01 15:26:08.629287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.997 ms 00:26:10.105 [2024-10-01 15:26:08.629297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.629867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.629889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:10.105 [2024-10-01 15:26:08.629900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:26:10.105 [2024-10-01 15:26:08.629911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.631862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.631886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:10.105 [2024-10-01 15:26:08.631897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.934 ms 00:26:10.105 [2024-10-01 15:26:08.631908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.631972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.631984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:10.105 [2024-10-01 15:26:08.631995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:10.105 [2024-10-01 15:26:08.632005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.632106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.632119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:10.105 
[2024-10-01 15:26:08.632130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:10.105 [2024-10-01 15:26:08.632141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.632167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.632190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:10.105 [2024-10-01 15:26:08.632201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:10.105 [2024-10-01 15:26:08.632211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.632244] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:10.105 [2024-10-01 15:26:08.632260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.632270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:10.105 [2024-10-01 15:26:08.632281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:10.105 [2024-10-01 15:26:08.632291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.632343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.105 [2024-10-01 15:26:08.632357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:10.105 [2024-10-01 15:26:08.632368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:10.105 [2024-10-01 15:26:08.632387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.105 [2024-10-01 15:26:08.633316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1169.660 ms, result 0 00:26:10.105 [2024-10-01 15:26:08.645653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.365 [2024-10-01 15:26:08.661649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:10.365 [2024-10-01 15:26:08.669751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:10.934 Validate MD5 checksum, iteration 1 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:10.934 15:26:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:10.934 15:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.934 [2024-10-01 15:26:09.472044] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:26:10.934 [2024-10-01 15:26:09.472189] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91982 ] 00:26:11.194 [2024-10-01 15:26:09.638657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.194 [2024-10-01 15:26:09.686492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.077  Copying: 692/1024 [MB] (692 MBps) Copying: 1024/1024 [MB] (average 666 MBps) 00:26:14.077 00:26:14.077 15:26:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:14.077 15:26:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:16.000 Validate MD5 checksum, iteration 2 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d0a1005553f5719a71dc95c0e8e89b06 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d0a1005553f5719a71dc95c0e8e89b06 != \d\0\a\1\0\0\5\5\5\3\f\5\7\1\9\a\7\1\d\c\9\5\c\0\e\8\e\8\9\b\0\6 ]] 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:16.000 15:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.000 [2024-10-01 15:26:14.135942] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
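[editor's note] The xtrace above is test_validate_checksum from ftl/upgrade_shutdown.sh: the test reads the FTL bdev back over NVMe/TCP in 1 GiB slices and compares each slice's MD5 sum against the value recorded before the shutdown/upgrade cycle. A minimal sketch of that loop, reconstructed from the trace — tcp_dd is the helper traced from test/ftl/common.sh, while iterations, $testfile and the expected[] array stand in for state set up by the surrounding test and are assumptions of this sketch:

    # Sketch of the traced validation loop, not the verbatim script.
    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            # Read the next 1024 MiB slice (bs = 1 MiB, qd = 2) from the FTL
            # bdev over NVMe/TCP via the spdk_dd wrapper traced above.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            # expected[i] is hypothetical here: the sum recorded for this
            # slice before the FTL device was shut down.
            [[ $sum == "${expected[i]}" ]] || return 1
            skip=$(( skip + 1024 ))
        done
    }

In this run, iteration 1 read skip=0 and produced d0a1005553f5719a71dc95c0e8e89b06 and iteration 2 read skip=1024 and produced 81f51b8c55d7c70b6589b536555825a3; both matched the recorded sums, confirming the data survived the shutdown.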
00:26:16.000 [2024-10-01 15:26:14.136088] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92032 ] 00:26:16.000 [2024-10-01 15:26:14.303243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.000 [2024-10-01 15:26:14.351489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.531  Copying: 697/1024 [MB] (697 MBps) Copying: 1024/1024 [MB] (average 675 MBps) 00:26:18.531 00:26:18.531 15:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:18.531 15:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=81f51b8c55d7c70b6589b536555825a3 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 81f51b8c55d7c70b6589b536555825a3 != \8\1\f\5\1\b\8\c\5\5\d\7\c\7\0\b\6\5\8\9\b\5\3\6\5\5\5\8\2\5\a\3 ]] 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91947 ]] 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91947 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91947 ']' 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91947 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91947 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:20.446 killing process with pid 91947 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91947' 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91947 00:26:20.446 15:26:18 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 91947 00:26:20.446 [2024-10-01 15:26:18.930930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:20.446 [2024-10-01 15:26:18.937647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.446 [2024-10-01 15:26:18.937696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:20.446 [2024-10-01 15:26:18.937712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:20.446 [2024-10-01 15:26:18.937724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.446 [2024-10-01 15:26:18.937748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:20.446 [2024-10-01 15:26:18.938412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.446 [2024-10-01 15:26:18.938436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:20.446 [2024-10-01 15:26:18.938448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:26:20.446 [2024-10-01 15:26:18.938466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.446 [2024-10-01 15:26:18.938702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.446 [2024-10-01 15:26:18.938722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:20.446 [2024-10-01 15:26:18.938734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:26:20.446 [2024-10-01 15:26:18.938745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.446 [2024-10-01 15:26:18.939896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.446 [2024-10-01 15:26:18.939933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:20.446 [2024-10-01 15:26:18.939946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:26:20.446 [2024-10-01 15:26:18.939958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.446 [2024-10-01 15:26:18.941000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.446 [2024-10-01 15:26:18.941053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:20.446 [2024-10-01 15:26:18.941079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:26:20.446 [2024-10-01 15:26:18.941089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.446 [2024-10-01 15:26:18.942390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.942438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:20.447 [2024-10-01 15:26:18.942467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.240 ms 00:26:20.447 [2024-10-01 15:26:18.942479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.943783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.943827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:20.447 [2024-10-01 15:26:18.943840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.273 ms 00:26:20.447 [2024-10-01 15:26:18.943851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.943924] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.943937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:20.447 [2024-10-01 15:26:18.943949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:20.447 [2024-10-01 15:26:18.943960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.945065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.945104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:20.447 [2024-10-01 15:26:18.945117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.088 ms 00:26:20.447 [2024-10-01 15:26:18.945138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.946522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.946556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:20.447 [2024-10-01 15:26:18.946567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.356 ms 00:26:20.447 [2024-10-01 15:26:18.946577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.947774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.947810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:20.447 [2024-10-01 15:26:18.947821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:26:20.447 [2024-10-01 15:26:18.947832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.949023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.949058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:20.447 [2024-10-01 15:26:18.949070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.138 ms 00:26:20.447 [2024-10-01 15:26:18.949081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.949110] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:20.447 [2024-10-01 15:26:18.949126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:20.447 [2024-10-01 15:26:18.949146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:20.447 [2024-10-01 15:26:18.949157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:20.447 [2024-10-01 15:26:18.949183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:20.447 [2024-10-01 15:26:18.949350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:20.447 [2024-10-01 15:26:18.949361] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 05c080d4-a80f-4345-899f-f3776a4bb0d9 00:26:20.447 [2024-10-01 15:26:18.949373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:20.447 [2024-10-01 15:26:18.949383] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:20.447 [2024-10-01 15:26:18.949392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:20.447 [2024-10-01 15:26:18.949402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:20.447 [2024-10-01 15:26:18.949412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:20.447 [2024-10-01 15:26:18.949423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:20.447 [2024-10-01 15:26:18.949434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:20.447 [2024-10-01 15:26:18.949444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:20.447 [2024-10-01 15:26:18.949453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:20.447 [2024-10-01 15:26:18.949463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.949473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:20.447 [2024-10-01 15:26:18.949485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:26:20.447 [2024-10-01 15:26:18.949499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.951200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:20.447 [2024-10-01 15:26:18.951224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:20.447 [2024-10-01 15:26:18.951236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.684 ms 00:26:20.447 [2024-10-01 15:26:18.951246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.951351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
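[editor's note] Each management step in the traces above and below is emitted by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruple, so a captured copy of this output can be mined directly for where FTL startup and shutdown time goes. A sketch, assuming the log has been saved one entry per line (ftl.log is a placeholder name):

    # Pair every "name:" line with the "duration:" line that follows it,
    # then list the most expensive steps first.
    grep -E 'trace_step.*(name|duration): ' ftl.log |
        awk -F '(name|duration): ' '/ name: /     { step = $2; next }
                                    / duration: / { print $2 "\t" step }' |
        sort -rn | head

Against this run it would surface the two "Chunk recovery, read vss" steps (549.346 ms and 524.563 ms) as the dominant share of the 1169.660 ms FTL startup.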
00:26:20.447 [2024-10-01 15:26:18.951363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:20.447 [2024-10-01 15:26:18.951379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:26:20.447 [2024-10-01 15:26:18.951389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.958430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.958471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:20.447 [2024-10-01 15:26:18.958485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.958495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.958532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.958542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:20.447 [2024-10-01 15:26:18.958560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.958569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.958636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.958650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:20.447 [2024-10-01 15:26:18.958661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.958671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.958691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.958702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:20.447 [2024-10-01 15:26:18.958712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.958727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.972536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.972607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:20.447 [2024-10-01 15:26:18.972620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.972631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.980835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.980883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:20.447 [2024-10-01 15:26:18.980904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.980916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:20.447 [2024-10-01 15:26:18.981024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981072] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:20.447 [2024-10-01 15:26:18.981100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:20.447 [2024-10-01 15:26:18.981230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:20.447 [2024-10-01 15:26:18.981312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:20.447 [2024-10-01 15:26:18.981397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:20.447 [2024-10-01 15:26:18.981465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:20.447 [2024-10-01 15:26:18.981475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:20.447 [2024-10-01 15:26:18.981485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:20.447 [2024-10-01 15:26:18.981620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 44.005 ms, result 0 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:21.011 Remove shared memory files 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid91747 00:26:21.011 15:26:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:21.011 ************************************ 00:26:21.011 END TEST ftl_upgrade_shutdown 00:26:21.011 ************************************ 00:26:21.011 00:26:21.011 real 1m8.714s 00:26:21.011 user 1m31.928s 00:26:21.011 sys 0m21.011s 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:21.011 15:26:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 15:26:19 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:21.011 15:26:19 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:21.011 15:26:19 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:21.011 15:26:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:21.011 15:26:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 ************************************ 00:26:21.011 START TEST ftl_restore_fast 00:26:21.011 ************************************ 00:26:21.011 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:21.011 * Looking for test storage... 00:26:21.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.011 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:21.011 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:21.011 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.270 --rc genhtml_branch_coverage=1 00:26:21.270 --rc genhtml_function_coverage=1 00:26:21.270 --rc genhtml_legend=1 00:26:21.270 --rc geninfo_all_blocks=1 00:26:21.270 --rc geninfo_unexecuted_blocks=1 00:26:21.270 00:26:21.270 ' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.270 --rc genhtml_branch_coverage=1 00:26:21.270 --rc genhtml_function_coverage=1 00:26:21.270 --rc genhtml_legend=1 00:26:21.270 --rc geninfo_all_blocks=1 00:26:21.270 --rc geninfo_unexecuted_blocks=1 00:26:21.270 00:26:21.270 ' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.270 --rc genhtml_branch_coverage=1 00:26:21.270 --rc genhtml_function_coverage=1 00:26:21.270 --rc genhtml_legend=1 00:26:21.270 --rc geninfo_all_blocks=1 00:26:21.270 --rc geninfo_unexecuted_blocks=1 00:26:21.270 00:26:21.270 ' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:21.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.270 --rc genhtml_branch_coverage=1 00:26:21.270 --rc genhtml_function_coverage=1 00:26:21.270 --rc genhtml_legend=1 00:26:21.270 --rc geninfo_all_blocks=1 00:26:21.270 --rc geninfo_unexecuted_blocks=1 00:26:21.270 00:26:21.270 ' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
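[editor's note] The scripts/common.sh xtrace above is stepping through cmp_versions for the lcov version check: both version strings are split on ".", "-" and ":" and compared field by field, with the first differing field deciding. Condensed to the lt ("<") case exercised here — the traced helper dispatches on the full operator set and validates each field via decimal; this standalone sketch keeps only "less than":

    lt() {
        local IFS='.-:' i n
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields count as 0; the first differing field decides.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"

This matches the trace above, where ver1[0]=1 against ver2[0]=2 returns 0 on the first field without examining the rest.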
00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.270 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VdsDZ9rpCV 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:21.271 15:26:19 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92170 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92170 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92170 ']' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:21.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:21.271 15:26:19 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:21.271 [2024-10-01 15:26:19.734343] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 00:26:21.271 [2024-10-01 15:26:19.734490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92170 ] 00:26:21.529 [2024-10-01 15:26:19.903781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.529 [2024-10-01 15:26:19.953700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:22.093 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:22.352 15:26:20 ftl.ftl_restore_fast -- 
00:26:22.352 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb
00:26:22.352 15:26:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[
00:26:22.611 {
00:26:22.611 "name": "nvme0n1",
00:26:22.611 "aliases": [
00:26:22.611 "8df8d5a9-0779-41f5-bb39-213817c3e4ce"
00:26:22.611 ],
00:26:22.611 "product_name": "NVMe disk",
00:26:22.611 "block_size": 4096,
00:26:22.611 "num_blocks": 1310720,
00:26:22.611 "uuid": "8df8d5a9-0779-41f5-bb39-213817c3e4ce",
00:26:22.611 "numa_id": -1,
00:26:22.611 "assigned_rate_limits": {
00:26:22.611 "rw_ios_per_sec": 0,
00:26:22.611 "rw_mbytes_per_sec": 0,
00:26:22.611 "r_mbytes_per_sec": 0,
00:26:22.611 "w_mbytes_per_sec": 0
00:26:22.611 },
00:26:22.611 "claimed": true,
00:26:22.611 "claim_type": "read_many_write_one",
00:26:22.611 "zoned": false,
00:26:22.611 "supported_io_types": {
00:26:22.611 "read": true,
00:26:22.611 "write": true,
00:26:22.611 "unmap": true,
00:26:22.611 "flush": true,
00:26:22.611 "reset": true,
00:26:22.611 "nvme_admin": true,
00:26:22.611 "nvme_io": true,
00:26:22.611 "nvme_io_md": false,
00:26:22.611 "write_zeroes": true,
00:26:22.611 "zcopy": false,
00:26:22.611 "get_zone_info": false,
00:26:22.611 "zone_management": false,
00:26:22.611 "zone_append": false,
00:26:22.611 "compare": true,
00:26:22.611 "compare_and_write": false,
00:26:22.611 "abort": true,
00:26:22.611 "seek_hole": false,
00:26:22.611 "seek_data": false,
00:26:22.611 "copy": true,
00:26:22.611 "nvme_iov_md": false
00:26:22.611 },
00:26:22.611 "driver_specific": {
00:26:22.611 "nvme": [
00:26:22.611 {
00:26:22.611 "pci_address": "0000:00:11.0",
00:26:22.611 "trid": {
00:26:22.611 "trtype": "PCIe",
00:26:22.611 "traddr": "0000:00:11.0"
00:26:22.611 },
00:26:22.611 "ctrlr_data": {
00:26:22.611 "cntlid": 0,
00:26:22.611 "vendor_id": "0x1b36",
00:26:22.611 "model_number": "QEMU NVMe Ctrl",
00:26:22.611 "serial_number": "12341",
00:26:22.611 "firmware_revision": "8.0.0",
00:26:22.611 "subnqn": "nqn.2019-08.org.qemu:12341",
00:26:22.611 "oacs": {
00:26:22.611 "security": 0,
00:26:22.611 "format": 1,
00:26:22.611 "firmware": 0,
00:26:22.611 "ns_manage": 1
00:26:22.611 },
00:26:22.611 "multi_ctrlr": false,
00:26:22.611 "ana_reporting": false
00:26:22.611 },
00:26:22.611 "vs": {
00:26:22.611 "nvme_version": "1.4"
00:26:22.611 },
00:26:22.611 "ns_data": {
00:26:22.611 "id": 1,
00:26:22.611 "can_share": false
00:26:22.611 }
00:26:22.611 }
00:26:22.611 ],
00:26:22.611 "mp_policy": "active_passive"
00:26:22.611 }
00:26:22.611 }
00:26:22.611 ]'
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols
00:26:22.611 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@28 -- #
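
The get_bdev_size helper whose expansion fills the dump above is just bdev_get_bdevs plus two jq picks and a MiB conversion; with the values logged here, 4096 B x 1310720 blocks / 2^20 = 5120 MiB. A reduced sketch of the same pattern:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$("$rpc" bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")     # 4096 in the dump above
  nb=$(jq '.[] .num_blocks' <<< "$info")     # 1310720 in the dump above
  echo $(( bs * nb / 1024 / 1024 ))          # 4096 * 1310720 / 2^20 = 5120 MiB
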
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:22.611 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:22.870 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f2d4594f-c264-45c6-a2df-fe44fa33fa95 00:26:22.870 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:22.870 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2d4594f-c264-45c6-a2df-fe44fa33fa95 00:26:23.129 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:23.388 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=06132b78-7f03-4545-9f63-20222a2fd68e 00:26:23.388 15:26:21 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 06132b78-7f03-4545-9f63-20222a2fd68e 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:23.647 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:23.938 { 00:26:23.938 "name": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:23.938 "aliases": [ 00:26:23.938 "lvs/nvme0n1p0" 00:26:23.938 ], 00:26:23.938 "product_name": "Logical Volume", 00:26:23.938 "block_size": 4096, 00:26:23.938 "num_blocks": 26476544, 00:26:23.938 "uuid": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:23.938 "assigned_rate_limits": { 00:26:23.938 "rw_ios_per_sec": 0, 00:26:23.938 "rw_mbytes_per_sec": 0, 00:26:23.938 "r_mbytes_per_sec": 0, 00:26:23.938 "w_mbytes_per_sec": 0 00:26:23.938 }, 00:26:23.938 "claimed": false, 00:26:23.938 "zoned": false, 00:26:23.938 "supported_io_types": { 00:26:23.938 "read": true, 00:26:23.938 "write": true, 00:26:23.938 "unmap": true, 00:26:23.938 "flush": false, 00:26:23.938 "reset": true, 00:26:23.938 "nvme_admin": false, 00:26:23.938 "nvme_io": false, 00:26:23.938 "nvme_io_md": false, 00:26:23.938 "write_zeroes": true, 00:26:23.938 "zcopy": false, 00:26:23.938 "get_zone_info": false, 00:26:23.938 "zone_management": false, 00:26:23.938 
"zone_append": false, 00:26:23.938 "compare": false, 00:26:23.938 "compare_and_write": false, 00:26:23.938 "abort": false, 00:26:23.938 "seek_hole": true, 00:26:23.938 "seek_data": true, 00:26:23.938 "copy": false, 00:26:23.938 "nvme_iov_md": false 00:26:23.938 }, 00:26:23.938 "driver_specific": { 00:26:23.938 "lvol": { 00:26:23.938 "lvol_store_uuid": "06132b78-7f03-4545-9f63-20222a2fd68e", 00:26:23.938 "base_bdev": "nvme0n1", 00:26:23.938 "thin_provision": true, 00:26:23.938 "num_allocated_clusters": 0, 00:26:23.938 "snapshot": false, 00:26:23.938 "clone": false, 00:26:23.938 "esnap_clone": false 00:26:23.938 } 00:26:23.938 } 00:26:23.938 } 00:26:23.938 ]' 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:23.938 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:24.198 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:24.457 { 00:26:24.457 "name": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:24.457 "aliases": [ 00:26:24.457 "lvs/nvme0n1p0" 00:26:24.457 ], 00:26:24.457 "product_name": "Logical Volume", 00:26:24.457 "block_size": 4096, 00:26:24.457 "num_blocks": 26476544, 00:26:24.457 "uuid": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:24.457 "assigned_rate_limits": { 00:26:24.457 "rw_ios_per_sec": 0, 00:26:24.457 "rw_mbytes_per_sec": 0, 00:26:24.457 "r_mbytes_per_sec": 0, 00:26:24.457 "w_mbytes_per_sec": 0 00:26:24.457 }, 00:26:24.457 "claimed": false, 00:26:24.457 "zoned": false, 00:26:24.457 "supported_io_types": { 00:26:24.457 "read": true, 00:26:24.457 "write": true, 00:26:24.457 "unmap": true, 00:26:24.457 "flush": false, 00:26:24.457 "reset": true, 00:26:24.457 "nvme_admin": false, 00:26:24.457 "nvme_io": false, 00:26:24.457 "nvme_io_md": false, 00:26:24.457 "write_zeroes": true, 00:26:24.457 "zcopy": false, 00:26:24.457 "get_zone_info": false, 00:26:24.457 
"zone_management": false, 00:26:24.457 "zone_append": false, 00:26:24.457 "compare": false, 00:26:24.457 "compare_and_write": false, 00:26:24.457 "abort": false, 00:26:24.457 "seek_hole": true, 00:26:24.457 "seek_data": true, 00:26:24.457 "copy": false, 00:26:24.457 "nvme_iov_md": false 00:26:24.457 }, 00:26:24.457 "driver_specific": { 00:26:24.457 "lvol": { 00:26:24.457 "lvol_store_uuid": "06132b78-7f03-4545-9f63-20222a2fd68e", 00:26:24.457 "base_bdev": "nvme0n1", 00:26:24.457 "thin_provision": true, 00:26:24.457 "num_allocated_clusters": 0, 00:26:24.457 "snapshot": false, 00:26:24.457 "clone": false, 00:26:24.457 "esnap_clone": false 00:26:24.457 } 00:26:24.457 } 00:26:24.457 } 00:26:24.457 ]' 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:24.457 15:26:22 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:24.716 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89f498ef-4a97-4630-9efa-e484fbf4bb1a 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:24.975 { 00:26:24.975 "name": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:24.975 "aliases": [ 00:26:24.975 "lvs/nvme0n1p0" 00:26:24.975 ], 00:26:24.975 "product_name": "Logical Volume", 00:26:24.975 "block_size": 4096, 00:26:24.975 "num_blocks": 26476544, 00:26:24.975 "uuid": "89f498ef-4a97-4630-9efa-e484fbf4bb1a", 00:26:24.975 "assigned_rate_limits": { 00:26:24.975 "rw_ios_per_sec": 0, 00:26:24.975 "rw_mbytes_per_sec": 0, 00:26:24.975 "r_mbytes_per_sec": 0, 00:26:24.975 "w_mbytes_per_sec": 0 00:26:24.975 }, 00:26:24.975 "claimed": false, 00:26:24.975 "zoned": false, 00:26:24.975 "supported_io_types": { 00:26:24.975 "read": true, 00:26:24.975 "write": true, 00:26:24.975 "unmap": true, 00:26:24.975 "flush": false, 00:26:24.975 "reset": true, 00:26:24.975 "nvme_admin": false, 00:26:24.975 "nvme_io": false, 00:26:24.975 "nvme_io_md": false, 00:26:24.975 "write_zeroes": true, 00:26:24.975 "zcopy": false, 00:26:24.975 "get_zone_info": false, 00:26:24.975 "zone_management": false, 00:26:24.975 "zone_append": false, 00:26:24.975 "compare": false, 00:26:24.975 "compare_and_write": false, 00:26:24.975 "abort": false, 
00:26:24.975 "seek_hole": true, 00:26:24.975 "seek_data": true, 00:26:24.975 "copy": false, 00:26:24.975 "nvme_iov_md": false 00:26:24.975 }, 00:26:24.975 "driver_specific": { 00:26:24.975 "lvol": { 00:26:24.975 "lvol_store_uuid": "06132b78-7f03-4545-9f63-20222a2fd68e", 00:26:24.975 "base_bdev": "nvme0n1", 00:26:24.975 "thin_provision": true, 00:26:24.975 "num_allocated_clusters": 0, 00:26:24.975 "snapshot": false, 00:26:24.975 "clone": false, 00:26:24.975 "esnap_clone": false 00:26:24.975 } 00:26:24.975 } 00:26:24.975 } 00:26:24.975 ]' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 89f498ef-4a97-4630-9efa-e484fbf4bb1a --l2p_dram_limit 10' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:24.975 15:26:23 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 89f498ef-4a97-4630-9efa-e484fbf4bb1a --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:25.235 [2024-10-01 15:26:23.617557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.617624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:25.235 [2024-10-01 15:26:23.617643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:25.235 [2024-10-01 15:26:23.617657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.617724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.617739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.235 [2024-10-01 15:26:23.617750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:25.235 [2024-10-01 15:26:23.617766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.617797] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:25.235 [2024-10-01 15:26:23.618107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:25.235 [2024-10-01 15:26:23.618143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.618157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.235 [2024-10-01 15:26:23.618184] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:26:25.235 [2024-10-01 15:26:23.618198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.618354] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:26:25.235 [2024-10-01 15:26:23.619838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.619865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:25.235 [2024-10-01 15:26:23.619882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:25.235 [2024-10-01 15:26:23.619892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.627369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.627405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.235 [2024-10-01 15:26:23.627422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.437 ms 00:26:25.235 [2024-10-01 15:26:23.627434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.627524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.627536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.235 [2024-10-01 15:26:23.627559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:25.235 [2024-10-01 15:26:23.627572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.627655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.627674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:25.235 [2024-10-01 15:26:23.627687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:25.235 [2024-10-01 15:26:23.627697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.627728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:25.235 [2024-10-01 15:26:23.629550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.629584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.235 [2024-10-01 15:26:23.629607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:26:25.235 [2024-10-01 15:26:23.629620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.629656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.629670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:25.235 [2024-10-01 15:26:23.629681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:25.235 [2024-10-01 15:26:23.629696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.629715] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:25.235 [2024-10-01 15:26:23.629852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:25.235 [2024-10-01 15:26:23.629867] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:25.235 [2024-10-01 15:26:23.629883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:25.235 [2024-10-01 15:26:23.629896] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:25.235 [2024-10-01 15:26:23.629911] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:25.235 [2024-10-01 15:26:23.629922] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:25.235 [2024-10-01 15:26:23.629948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:25.235 [2024-10-01 15:26:23.629957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:25.235 [2024-10-01 15:26:23.629970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:25.235 [2024-10-01 15:26:23.629982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.629995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:25.235 [2024-10-01 15:26:23.630006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:25.235 [2024-10-01 15:26:23.630018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.630090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.235 [2024-10-01 15:26:23.630106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:25.235 [2024-10-01 15:26:23.630117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:25.235 [2024-10-01 15:26:23.630129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.235 [2024-10-01 15:26:23.630231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:25.235 [2024-10-01 15:26:23.630250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:25.235 [2024-10-01 15:26:23.630268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.235 [2024-10-01 15:26:23.630281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.235 [2024-10-01 15:26:23.630291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:25.236 [2024-10-01 15:26:23.630303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:25.236 [2024-10-01 15:26:23.630334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.236 [2024-10-01 15:26:23.630356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:25.236 [2024-10-01 15:26:23.630368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:25.236 [2024-10-01 15:26:23.630377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.236 [2024-10-01 15:26:23.630398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:25.236 [2024-10-01 15:26:23.630408] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:25.236 [2024-10-01 15:26:23.630421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:25.236 [2024-10-01 15:26:23.630443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:25.236 [2024-10-01 15:26:23.630473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:25.236 [2024-10-01 15:26:23.630506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:25.236 [2024-10-01 15:26:23.630536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:25.236 [2024-10-01 15:26:23.630570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:25.236 [2024-10-01 15:26:23.630600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.236 [2024-10-01 15:26:23.630620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:25.236 [2024-10-01 15:26:23.630633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:25.236 [2024-10-01 15:26:23.630642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.236 [2024-10-01 15:26:23.630654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:25.236 [2024-10-01 15:26:23.630663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:25.236 [2024-10-01 15:26:23.630674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:25.236 [2024-10-01 15:26:23.630695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:25.236 [2024-10-01 15:26:23.630704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630715] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:25.236 [2024-10-01 15:26:23.630725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:25.236 [2024-10-01 15:26:23.630740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:26:25.236 [2024-10-01 15:26:23.630750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.236 [2024-10-01 15:26:23.630763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:25.236 [2024-10-01 15:26:23.630773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:25.236 [2024-10-01 15:26:23.630785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:25.236 [2024-10-01 15:26:23.630795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:25.236 [2024-10-01 15:26:23.630806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:25.236 [2024-10-01 15:26:23.630816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:25.236 [2024-10-01 15:26:23.630833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:25.236 [2024-10-01 15:26:23.630845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.630859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:25.236 [2024-10-01 15:26:23.630869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:25.236 [2024-10-01 15:26:23.630882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:25.236 [2024-10-01 15:26:23.630892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:25.236 [2024-10-01 15:26:23.630905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:25.236 [2024-10-01 15:26:23.630916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:25.236 [2024-10-01 15:26:23.630933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:25.236 [2024-10-01 15:26:23.630943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:25.236 [2024-10-01 15:26:23.630956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:25.236 [2024-10-01 15:26:23.630966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.630979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.630989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.631001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.631012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
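
The superblock layout dump above is easy to sanity-check: the 0x5000-block region in the nvc metadata table (evidently the L2P) is 20480 blocks of 4 KiB, i.e. 80 MiB, which matches both the "blocks: 80.00 MiB" of the l2p region in the NV cache layout and the 20971520 L2P entries at 4 bytes apiece reported a few entries earlier. As a quick shell check:

  echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # l2p region:  80 (MiB)
  echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries: 80 (MiB)
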
00:26:25.236 [2024-10-01 15:26:23.631024] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:25.236 [2024-10-01 15:26:23.631039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.631060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.236 [2024-10-01 15:26:23.631070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:25.236 [2024-10-01 15:26:23.631083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:25.236 [2024-10-01 15:26:23.631093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:25.236 [2024-10-01 15:26:23.631107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.236 [2024-10-01 15:26:23.631117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:25.236 [2024-10-01 15:26:23.631133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:26:25.236 [2024-10-01 15:26:23.631142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.236 [2024-10-01 15:26:23.631199] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:25.236 [2024-10-01 15:26:23.631213] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:28.542 [2024-10-01 15:26:26.656754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.656824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:28.543 [2024-10-01 15:26:26.656849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3030.459 ms 00:26:28.543 [2024-10-01 15:26:26.656861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.668236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.668297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.543 [2024-10-01 15:26:26.668331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.287 ms 00:26:28.543 [2024-10-01 15:26:26.668342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.668482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.668501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:28.543 [2024-10-01 15:26:26.668519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:26:28.543 [2024-10-01 15:26:26.668529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.679131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.679202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.543 [2024-10-01 15:26:26.679221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.533 ms 00:26:28.543 [2024-10-01 15:26:26.679232] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.679276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.679295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.543 [2024-10-01 15:26:26.679314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:28.543 [2024-10-01 15:26:26.679324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.679820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.679842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.543 [2024-10-01 15:26:26.679856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:26:28.543 [2024-10-01 15:26:26.679867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.679977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.679988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.543 [2024-10-01 15:26:26.680002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:28.543 [2024-10-01 15:26:26.680014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.698342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.698407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.543 [2024-10-01 15:26:26.698433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.329 ms 00:26:28.543 [2024-10-01 15:26:26.698449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.708609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:28.543 [2024-10-01 15:26:26.711865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.711905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:28.543 [2024-10-01 15:26:26.711919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.303 ms 00:26:28.543 [2024-10-01 15:26:26.711945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.771961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.772037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:28.543 [2024-10-01 15:26:26.772054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.072 ms 00:26:28.543 [2024-10-01 15:26:26.772070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.772272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.772290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:28.543 [2024-10-01 15:26:26.772301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:26:28.543 [2024-10-01 15:26:26.772315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.775767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.775810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:26:28.543 [2024-10-01 15:26:26.775824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:26:28.543 [2024-10-01 15:26:26.775837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.778636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.778675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:28.543 [2024-10-01 15:26:26.778689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:26:28.543 [2024-10-01 15:26:26.778702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.778982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.778999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:28.543 [2024-10-01 15:26:26.779010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:26:28.543 [2024-10-01 15:26:26.779026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.812546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.812610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:28.543 [2024-10-01 15:26:26.812641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.550 ms 00:26:28.543 [2024-10-01 15:26:26.812655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.817292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.817340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:28.543 [2024-10-01 15:26:26.817355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:26:28.543 [2024-10-01 15:26:26.817369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.820749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.820787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:28.543 [2024-10-01 15:26:26.820801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:26:28.543 [2024-10-01 15:26:26.820813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.824520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.824561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:28.543 [2024-10-01 15:26:26.824574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:26:28.543 [2024-10-01 15:26:26.824590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.824634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.824648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:28.543 [2024-10-01 15:26:26.824660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:28.543 [2024-10-01 15:26:26.824672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.824744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.543 [2024-10-01 15:26:26.824759] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:28.543 [2024-10-01 15:26:26.824769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:28.543 [2024-10-01 15:26:26.824793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.543 [2024-10-01 15:26:26.825825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3213.093 ms, result 0 00:26:28.543 { 00:26:28.543 "name": "ftl0", 00:26:28.543 "uuid": "e5b3aa04-2dd4-4ab1-bb95-898329cbe42d" 00:26:28.543 } 00:26:28.543 15:26:26 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:28.543 15:26:26 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:28.543 15:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:28.543 15:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:28.803 [2024-10-01 15:26:27.249870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.803 [2024-10-01 15:26:27.249941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.803 [2024-10-01 15:26:27.249962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:28.803 [2024-10-01 15:26:27.249973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.803 [2024-10-01 15:26:27.250003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.803 [2024-10-01 15:26:27.250719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.803 [2024-10-01 15:26:27.250749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.803 [2024-10-01 15:26:27.250762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:26:28.803 [2024-10-01 15:26:27.250775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.803 [2024-10-01 15:26:27.251001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.803 [2024-10-01 15:26:27.251022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.803 [2024-10-01 15:26:27.251032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:28.803 [2024-10-01 15:26:27.251054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.803 [2024-10-01 15:26:27.253577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.803 [2024-10-01 15:26:27.253607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.803 [2024-10-01 15:26:27.253620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:26:28.803 [2024-10-01 15:26:27.253633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.803 [2024-10-01 15:26:27.258626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.803 [2024-10-01 15:26:27.258664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.804 [2024-10-01 15:26:27.258677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms 00:26:28.804 [2024-10-01 15:26:27.258690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.260326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
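
With ftl0 up (startup finished in 3213.093 ms, result 0), restore.sh snapshots the bdev subsystem before tearing the target down: the two echoes around save_subsystem_config wrap the RPC output into a complete JSON document that a fresh target can later be booted from. A sketch of the same idea, assuming the harness redirects it to the tgt.json path exported at the top of this section:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
      echo '{"subsystems": ['
      "$rpc" save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
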
00:26:28.804 [2024-10-01 15:26:27.260376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.804 [2024-10-01 15:26:27.260389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:26:28.804 [2024-10-01 15:26:27.260404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.265050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.265099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.804 [2024-10-01 15:26:27.265112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:26:28.804 [2024-10-01 15:26:27.265134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.265260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.265277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.804 [2024-10-01 15:26:27.265288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:28.804 [2024-10-01 15:26:27.265300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.267115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.267158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.804 [2024-10-01 15:26:27.267181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:26:28.804 [2024-10-01 15:26:27.267194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.268529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.268571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:28.804 [2024-10-01 15:26:27.268583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:26:28.804 [2024-10-01 15:26:27.268595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.269842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.269882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.804 [2024-10-01 15:26:27.269894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:26:28.804 [2024-10-01 15:26:27.269906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.271246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.804 [2024-10-01 15:26:27.271285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.804 [2024-10-01 15:26:27.271296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:26:28.804 [2024-10-01 15:26:27.271308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.804 [2024-10-01 15:26:27.271340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:28.804 [2024-10-01 15:26:27.271360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271387] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271699] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.271988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 
15:26:27.272003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.804 [2024-10-01 15:26:27.272157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:26:28.805 [2024-10-01 15:26:27.272329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.805 [2024-10-01 15:26:27.272611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.805 [2024-10-01 15:26:27.272622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:26:28.805 
[2024-10-01 15:26:27.272644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:28.805 [2024-10-01 15:26:27.272653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:28.805 [2024-10-01 15:26:27.272673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:28.805 [2024-10-01 15:26:27.272684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:28.805 [2024-10-01 15:26:27.272696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.805 [2024-10-01 15:26:27.272706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.805 [2024-10-01 15:26:27.272719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.805 [2024-10-01 15:26:27.272728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.805 [2024-10-01 15:26:27.272740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.805 [2024-10-01 15:26:27.272750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.805 [2024-10-01 15:26:27.272763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.805 [2024-10-01 15:26:27.272776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:26:28.805 [2024-10-01 15:26:27.272789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.274614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.805 [2024-10-01 15:26:27.274646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.805 [2024-10-01 15:26:27.274658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:26:28.805 [2024-10-01 15:26:27.274678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.274793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.805 [2024-10-01 15:26:27.274809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.805 [2024-10-01 15:26:27.274820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:28.805 [2024-10-01 15:26:27.274832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.281942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.281984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.805 [2024-10-01 15:26:27.281997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.282011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.282073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.282088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.805 [2024-10-01 15:26:27.282099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.282111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.282189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.282210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.805 [2024-10-01 15:26:27.282220] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.282233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.282253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.282267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.805 [2024-10-01 15:26:27.282280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.282292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.296176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.296240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.805 [2024-10-01 15:26:27.296254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.296268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.305745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.305802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.805 [2024-10-01 15:26:27.305816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.305833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.305922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.305940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.805 [2024-10-01 15:26:27.305951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.305963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.306001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.306015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.805 [2024-10-01 15:26:27.306026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.306041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.306134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.306150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.805 [2024-10-01 15:26:27.306161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.306229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.306272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.306287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.805 [2024-10-01 15:26:27.306298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.306314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.306359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.306378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:28.805 [2024-10-01 15:26:27.306388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.805 [2024-10-01 15:26:27.306401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.805 [2024-10-01 15:26:27.306463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.805 [2024-10-01 15:26:27.306478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.805 [2024-10-01 15:26:27.306488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.806 [2024-10-01 15:26:27.306503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.806 [2024-10-01 15:26:27.306631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.831 ms, result 0 00:26:28.806 true 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92170 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92170 ']' 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92170 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:28.806 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92170 00:26:29.065 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.065 killing process with pid 92170 00:26:29.065 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.065 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92170' 00:26:29.065 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92170 00:26:29.065 15:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92170 00:26:32.348 15:26:30 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:36.530 262144+0 records in 00:26:36.530 262144+0 records out 00:26:36.530 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.07741 s, 263 MB/s 00:26:36.530 15:26:34 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:37.909 15:26:36 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.909 [2024-10-01 15:26:36.118141] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:26:37.909 [2024-10-01 15:26:36.118302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92367 ] 00:26:37.909 [2024-10-01 15:26:36.285699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.909 [2024-10-01 15:26:36.336216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.909 [2024-10-01 15:26:36.439944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.909 [2024-10-01 15:26:36.440022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.169 [2024-10-01 15:26:36.598775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.598840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:38.169 [2024-10-01 15:26:36.598866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.169 [2024-10-01 15:26:36.598877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.598941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.598954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.169 [2024-10-01 15:26:36.598965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:38.169 [2024-10-01 15:26:36.598975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.598997] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:38.169 [2024-10-01 15:26:36.599361] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:38.169 [2024-10-01 15:26:36.599397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.599407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.169 [2024-10-01 15:26:36.599430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:26:38.169 [2024-10-01 15:26:36.599440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.601055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:38.169 [2024-10-01 15:26:36.603556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.603589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:38.169 [2024-10-01 15:26:36.603603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 00:26:38.169 [2024-10-01 15:26:36.603620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.603701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.603715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:38.169 [2024-10-01 15:26:36.603727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:38.169 [2024-10-01 15:26:36.603740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.610537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:38.169 [2024-10-01 15:26:36.610583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.169 [2024-10-01 15:26:36.610596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:26:38.169 [2024-10-01 15:26:36.610611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.610721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.610734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.169 [2024-10-01 15:26:36.610752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:38.169 [2024-10-01 15:26:36.610762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.610820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.610838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:38.169 [2024-10-01 15:26:36.610850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:38.169 [2024-10-01 15:26:36.610867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.610901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:38.169 [2024-10-01 15:26:36.612560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.612582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.169 [2024-10-01 15:26:36.612594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:26:38.169 [2024-10-01 15:26:36.612604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.612636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.169 [2024-10-01 15:26:36.612647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:38.169 [2024-10-01 15:26:36.612658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:38.169 [2024-10-01 15:26:36.612679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.169 [2024-10-01 15:26:36.612707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:38.170 [2024-10-01 15:26:36.612730] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:38.170 [2024-10-01 15:26:36.612768] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:38.170 [2024-10-01 15:26:36.612786] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:38.170 [2024-10-01 15:26:36.612883] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:38.170 [2024-10-01 15:26:36.612896] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:38.170 [2024-10-01 15:26:36.612909] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:38.170 [2024-10-01 15:26:36.612926] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:38.170 [2024-10-01 15:26:36.612938] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:38.170 [2024-10-01 15:26:36.612949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:38.170 [2024-10-01 15:26:36.612959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:38.170 [2024-10-01 15:26:36.612969] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:38.170 [2024-10-01 15:26:36.612978] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:38.170 [2024-10-01 15:26:36.612989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.170 [2024-10-01 15:26:36.612999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:38.170 [2024-10-01 15:26:36.613009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:26:38.170 [2024-10-01 15:26:36.613018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.170 [2024-10-01 15:26:36.613089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.170 [2024-10-01 15:26:36.613107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:38.170 [2024-10-01 15:26:36.613117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:38.170 [2024-10-01 15:26:36.613127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.170 [2024-10-01 15:26:36.613240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:38.170 [2024-10-01 15:26:36.613261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:38.170 [2024-10-01 15:26:36.613280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:38.170 [2024-10-01 15:26:36.613321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:38.170 [2024-10-01 15:26:36.613350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.170 [2024-10-01 15:26:36.613368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:38.170 [2024-10-01 15:26:36.613379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:38.170 [2024-10-01 15:26:36.613388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.170 [2024-10-01 15:26:36.613401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:38.170 [2024-10-01 15:26:36.613411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:38.170 [2024-10-01 15:26:36.613420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:38.170 [2024-10-01 15:26:36.613438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613447] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:38.170 [2024-10-01 15:26:36.613465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:38.170 [2024-10-01 15:26:36.613491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:38.170 [2024-10-01 15:26:36.613518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:38.170 [2024-10-01 15:26:36.613553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:38.170 [2024-10-01 15:26:36.613580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.170 [2024-10-01 15:26:36.613598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:38.170 [2024-10-01 15:26:36.613607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:38.170 [2024-10-01 15:26:36.613618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.170 [2024-10-01 15:26:36.613627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:38.170 [2024-10-01 15:26:36.613636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:38.170 [2024-10-01 15:26:36.613645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:38.170 [2024-10-01 15:26:36.613663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:38.170 [2024-10-01 15:26:36.613672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613681] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:38.170 [2024-10-01 15:26:36.613691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:38.170 [2024-10-01 15:26:36.613714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.170 [2024-10-01 15:26:36.613734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:38.170 [2024-10-01 15:26:36.613743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:38.170 [2024-10-01 15:26:36.613752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:38.170 
[2024-10-01 15:26:36.613761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:38.170 [2024-10-01 15:26:36.613770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:38.170 [2024-10-01 15:26:36.613779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:38.170 [2024-10-01 15:26:36.613789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:38.170 [2024-10-01 15:26:36.613801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:38.170 [2024-10-01 15:26:36.613829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:38.170 [2024-10-01 15:26:36.613839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:38.170 [2024-10-01 15:26:36.613849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:38.170 [2024-10-01 15:26:36.613859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:38.170 [2024-10-01 15:26:36.613869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:38.170 [2024-10-01 15:26:36.613882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:38.170 [2024-10-01 15:26:36.613892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:38.170 [2024-10-01 15:26:36.613902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:38.170 [2024-10-01 15:26:36.613912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:38.170 [2024-10-01 15:26:36.613970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:38.170 [2024-10-01 15:26:36.613981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.613995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:38.170 [2024-10-01 15:26:36.614009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:38.170 [2024-10-01 15:26:36.614023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:38.170 [2024-10-01 15:26:36.614037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:38.170 [2024-10-01 15:26:36.614048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.170 [2024-10-01 15:26:36.614061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:38.170 [2024-10-01 15:26:36.614078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:26:38.170 [2024-10-01 15:26:36.614088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.636580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.636644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.171 [2024-10-01 15:26:36.636672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.451 ms 00:26:38.171 [2024-10-01 15:26:36.636687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.636821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.636836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.171 [2024-10-01 15:26:36.636851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:38.171 [2024-10-01 15:26:36.636865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.647934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.647982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.171 [2024-10-01 15:26:36.647998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.981 ms 00:26:38.171 [2024-10-01 15:26:36.648009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.648064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.648077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.171 [2024-10-01 15:26:36.648088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:38.171 [2024-10-01 15:26:36.648098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.648615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.648643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.171 [2024-10-01 15:26:36.648654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:26:38.171 [2024-10-01 15:26:36.648664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.648783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.648796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.171 [2024-10-01 15:26:36.648806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:38.171 [2024-10-01 15:26:36.648816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.654793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.654841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.171 [2024-10-01 15:26:36.654854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.964 ms 00:26:38.171 [2024-10-01 15:26:36.654873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.657493] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:38.171 [2024-10-01 15:26:36.657533] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:38.171 [2024-10-01 15:26:36.657552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.657564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:38.171 [2024-10-01 15:26:36.657577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:26:38.171 [2024-10-01 15:26:36.657587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.671040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.671110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:38.171 [2024-10-01 15:26:36.671134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:26:38.171 [2024-10-01 15:26:36.671145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.673626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.673664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:38.171 [2024-10-01 15:26:36.673677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:26:38.171 [2024-10-01 15:26:36.673687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.675089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.675118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:38.171 [2024-10-01 15:26:36.675130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:26:38.171 [2024-10-01 15:26:36.675140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.675467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.675491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.171 [2024-10-01 15:26:36.675503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:26:38.171 [2024-10-01 15:26:36.675513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.695382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.695455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.171 [2024-10-01 15:26:36.695476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.852 ms 00:26:38.171 [2024-10-01 15:26:36.695487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.702185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.171 [2024-10-01 15:26:36.705539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.705571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.171 [2024-10-01 15:26:36.705586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.007 ms 00:26:38.171 [2024-10-01 15:26:36.705603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.705707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.705723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.171 [2024-10-01 15:26:36.705734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.171 [2024-10-01 15:26:36.705749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.705836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.705848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.171 [2024-10-01 15:26:36.705875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:38.171 [2024-10-01 15:26:36.705884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.705913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.705924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.171 [2024-10-01 15:26:36.705934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.171 [2024-10-01 15:26:36.705944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.705982] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.171 [2024-10-01 15:26:36.706001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.706011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.171 [2024-10-01 15:26:36.706024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:38.171 [2024-10-01 15:26:36.706034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.709740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.709775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.171 [2024-10-01 15:26:36.709788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:26:38.171 [2024-10-01 15:26:36.709798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 [2024-10-01 15:26:36.709868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.171 [2024-10-01 15:26:36.709880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.171 [2024-10-01 15:26:36.709891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:38.171 [2024-10-01 15:26:36.709901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.171 
[2024-10-01 15:26:36.711024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.005 ms, result 0 00:27:13.675  Copying: 26/1024 [MB] (26 MBps) Copying: 55/1024 [MB] (28 MBps) Copying: 83/1024 [MB] (28 MBps) Copying: 121/1024 [MB] (37 MBps) Copying: 151/1024 [MB] (30 MBps) Copying: 178/1024 [MB] (27 MBps) Copying: 205/1024 [MB] (26 MBps) Copying: 233/1024 [MB] (28 MBps) Copying: 261/1024 [MB] (27 MBps) Copying: 288/1024 [MB] (27 MBps) Copying: 317/1024 [MB] (28 MBps) Copying: 346/1024 [MB] (29 MBps) Copying: 376/1024 [MB] (30 MBps) Copying: 406/1024 [MB] (29 MBps) Copying: 433/1024 [MB] (27 MBps) Copying: 460/1024 [MB] (27 MBps) Copying: 489/1024 [MB] (28 MBps) Copying: 517/1024 [MB] (28 MBps) Copying: 545/1024 [MB] (28 MBps) Copying: 574/1024 [MB] (28 MBps) Copying: 603/1024 [MB] (28 MBps) Copying: 632/1024 [MB] (28 MBps) Copying: 661/1024 [MB] (29 MBps) Copying: 692/1024 [MB] (30 MBps) Copying: 720/1024 [MB] (28 MBps) Copying: 748/1024 [MB] (27 MBps) Copying: 777/1024 [MB] (29 MBps) Copying: 809/1024 [MB] (31 MBps) Copying: 839/1024 [MB] (30 MBps) Copying: 869/1024 [MB] (29 MBps) Copying: 899/1024 [MB] (30 MBps) Copying: 927/1024 [MB] (28 MBps) Copying: 956/1024 [MB] (28 MBps) Copying: 984/1024 [MB] (27 MBps) Copying: 1012/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-01 15:27:12.097357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.675 [2024-10-01 15:27:12.097418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:13.675 [2024-10-01 15:27:12.097435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:13.675 [2024-10-01 15:27:12.097446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.675 [2024-10-01 15:27:12.097473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:13.675 [2024-10-01 15:27:12.098149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.675 [2024-10-01 15:27:12.098162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:13.675 [2024-10-01 15:27:12.098189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:27:13.675 [2024-10-01 15:27:12.098200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.675 [2024-10-01 15:27:12.099880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.675 [2024-10-01 15:27:12.099918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:13.675 [2024-10-01 15:27:12.099930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:27:13.675 [2024-10-01 15:27:12.099940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.675 [2024-10-01 15:27:12.099975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.675 [2024-10-01 15:27:12.099987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:13.675 [2024-10-01 15:27:12.099998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:13.675 [2024-10-01 15:27:12.100008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.675 [2024-10-01 15:27:12.100055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.675 [2024-10-01 15:27:12.100066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 
00:27:13.675 [2024-10-01 15:27:12.100076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:13.675 [2024-10-01 15:27:12.100085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.675 [2024-10-01 15:27:12.100099] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:13.675 [2024-10-01 15:27:12.100114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 15:27:12.100616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:13.675 [2024-10-01 
15:27:12.100625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
[Bands 49-100 elided: ftl_dev_dump_bands reports every remaining band identically as "0 / 261120 wr_cnt: 0 state: free"]
00:27:13.676 [2024-10-01 15:27:12.101179] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:13.676 [2024-10-01 15:27:12.101193] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:27:13.676 [2024-10-01 15:27:12.101209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:13.676 [2024-10-01 15:27:12.101218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:13.676 [2024-10-01 15:27:12.101228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:13.676 [2024-10-01 15:27:12.101238] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:13.676 [2024-10-01 15:27:12.101248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:13.676 [2024-10-01 15:27:12.101265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:13.676 [2024-10-01 15:27:12.101275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:13.676 [2024-10-01 15:27:12.101283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:13.676 [2024-10-01 15:27:12.101292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:13.676 [2024-10-01 15:27:12.101302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.676 [2024-10-01 15:27:12.101312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:13.676 [2024-10-01 15:27:12.101322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:27:13.676 [2024-10-01 15:27:12.101332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.103021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.676 [2024-10-01 15:27:12.103045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:13.676 [2024-10-01 15:27:12.103057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:27:13.676 [2024-10-01 15:27:12.103072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.103184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.676 [2024-10-01 15:27:12.103195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:13.676 [2024-10-01 15:27:12.103209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:13.676 [2024-10-01 15:27:12.103220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.109249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.109283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:13.676 [2024-10-01 15:27:12.109295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.109306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01
15:27:12.109360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.109372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:13.676 [2024-10-01 15:27:12.109390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.109408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.109451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.109464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:13.676 [2024-10-01 15:27:12.109474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.109485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.109502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.109512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:13.676 [2024-10-01 15:27:12.109522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.109531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.122919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.122973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:13.676 [2024-10-01 15:27:12.122986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.122997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:13.676 [2024-10-01 15:27:12.131219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:13.676 [2024-10-01 15:27:12.131312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.676 [2024-10-01 15:27:12.131367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.676 [2024-10-01 15:27:12.131466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131477] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:13.676 [2024-10-01 15:27:12.131526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.676 [2024-10-01 15:27:12.131607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.676 [2024-10-01 15:27:12.131669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.676 [2024-10-01 15:27:12.131679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.676 [2024-10-01 15:27:12.131689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.676 [2024-10-01 15:27:12.131810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.470 ms, result 0 00:27:14.244 00:27:14.244 00:27:14.503 15:27:12 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:14.503 [2024-10-01 15:27:12.902001] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
00:27:14.503 [2024-10-01 15:27:12.902148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92737 ] 00:27:14.762 [2024-10-01 15:27:13.069709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.762 [2024-10-01 15:27:13.119487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.762 [2024-10-01 15:27:13.223284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.762 [2024-10-01 15:27:13.223370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:15.023 [2024-10-01 15:27:13.382529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.382596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:15.023 [2024-10-01 15:27:13.382631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:15.023 [2024-10-01 15:27:13.382642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.382714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.382728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:15.023 [2024-10-01 15:27:13.382739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:15.023 [2024-10-01 15:27:13.382756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.382786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:15.023 [2024-10-01 15:27:13.383044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:15.023 [2024-10-01 15:27:13.383063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.383074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:15.023 [2024-10-01 15:27:13.383088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:15.023 [2024-10-01 15:27:13.383098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.383497] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:15.023 [2024-10-01 15:27:13.383543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.383554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:15.023 [2024-10-01 15:27:13.383574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:15.023 [2024-10-01 15:27:13.383583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.383638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.383650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:15.023 [2024-10-01 15:27:13.383663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:15.023 [2024-10-01 15:27:13.383680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.384102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
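The startup sequence that follows re-derives the FTL layout, and the figures in its layout dump are internally consistent. As a hedged cross-check of two values that both appear in the dump below (nothing here is new data, only arithmetic):
  # Hedged cross-check: 20971520 L2P entries at an address size of 4 bytes
  # should occupy exactly the 80.00 MiB that the "Region l2p" entry reports.
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80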
00:27:15.023 [2024-10-01 15:27:13.384125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.023 [2024-10-01 15:27:13.384143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:27:15.023 [2024-10-01 15:27:13.384153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.384268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.023 [2024-10-01 15:27:13.384288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.023 [2024-10-01 15:27:13.384298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:15.023 [2024-10-01 15:27:13.384308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.023 [2024-10-01 15:27:13.384333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.024 [2024-10-01 15:27:13.384343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:15.024 [2024-10-01 15:27:13.384353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:15.024 [2024-10-01 15:27:13.384372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.024 [2024-10-01 15:27:13.384393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:15.024 [2024-10-01 15:27:13.386191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.024 [2024-10-01 15:27:13.386214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.024 [2024-10-01 15:27:13.386230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:27:15.024 [2024-10-01 15:27:13.386240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.024 [2024-10-01 15:27:13.386268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.024 [2024-10-01 15:27:13.386278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:15.024 [2024-10-01 15:27:13.386289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:15.024 [2024-10-01 15:27:13.386298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.024 [2024-10-01 15:27:13.386320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:15.024 [2024-10-01 15:27:13.386341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:15.024 [2024-10-01 15:27:13.386387] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:15.024 [2024-10-01 15:27:13.386413] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:15.024 [2024-10-01 15:27:13.386499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:15.024 [2024-10-01 15:27:13.386512] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:15.024 [2024-10-01 15:27:13.386525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:15.024 [2024-10-01 15:27:13.386537] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:15.024 [2024-10-01 15:27:13.386549] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:15.024 [2024-10-01 15:27:13.386563] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:15.024 [2024-10-01 15:27:13.386576] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:15.024 [2024-10-01 15:27:13.386586] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:15.024 [2024-10-01 15:27:13.386595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:15.024 [2024-10-01 15:27:13.386605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.024 [2024-10-01 15:27:13.386615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:15.024 [2024-10-01 15:27:13.386625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:27:15.024 [2024-10-01 15:27:13.386635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.024 [2024-10-01 15:27:13.386718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.024 [2024-10-01 15:27:13.386730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:15.024 [2024-10-01 15:27:13.386740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:15.024 [2024-10-01 15:27:13.386754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.024 [2024-10-01 15:27:13.386843] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:15.024 [2024-10-01 15:27:13.386863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:15.024 [2024-10-01 15:27:13.386873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.024 [2024-10-01 15:27:13.386884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.386898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:15.024 [2024-10-01 15:27:13.386916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.386926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:15.024 [2024-10-01 15:27:13.386935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:15.024 [2024-10-01 15:27:13.386944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:15.024 [2024-10-01 15:27:13.386953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.024 [2024-10-01 15:27:13.386964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:15.024 [2024-10-01 15:27:13.386974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:15.024 [2024-10-01 15:27:13.386983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.024 [2024-10-01 15:27:13.386992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:15.024 [2024-10-01 15:27:13.387002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:15.024 [2024-10-01 15:27:13.387010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:15.024 [2024-10-01 15:27:13.387028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387037] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:15.024 [2024-10-01 15:27:13.387059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:15.024 [2024-10-01 15:27:13.387086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:15.024 [2024-10-01 15:27:13.387113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:15.024 [2024-10-01 15:27:13.387140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:15.024 [2024-10-01 15:27:13.387167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.024 [2024-10-01 15:27:13.387199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:15.024 [2024-10-01 15:27:13.387208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:15.024 [2024-10-01 15:27:13.387225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.024 [2024-10-01 15:27:13.387234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:15.024 [2024-10-01 15:27:13.387244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:15.024 [2024-10-01 15:27:13.387253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:15.024 [2024-10-01 15:27:13.387271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:15.024 [2024-10-01 15:27:13.387281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387290] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:15.024 [2024-10-01 15:27:13.387300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:15.024 [2024-10-01 15:27:13.387309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.024 [2024-10-01 15:27:13.387332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:15.024 [2024-10-01 15:27:13.387341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:15.024 [2024-10-01 15:27:13.387350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:15.024 
[2024-10-01 15:27:13.387360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:15.024 [2024-10-01 15:27:13.387369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:15.024 [2024-10-01 15:27:13.387382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:15.024 [2024-10-01 15:27:13.387392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:15.024 [2024-10-01 15:27:13.387404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.024 [2024-10-01 15:27:13.387415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:15.024 [2024-10-01 15:27:13.387425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:15.024 [2024-10-01 15:27:13.387435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:15.024 [2024-10-01 15:27:13.387445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:15.024 [2024-10-01 15:27:13.387455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:15.024 [2024-10-01 15:27:13.387465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:15.024 [2024-10-01 15:27:13.387475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:15.024 [2024-10-01 15:27:13.387485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:15.024 [2024-10-01 15:27:13.387495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:15.024 [2024-10-01 15:27:13.387505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:15.024 [2024-10-01 15:27:13.387515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:15.024 [2024-10-01 15:27:13.387525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:15.024 [2024-10-01 15:27:13.387543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:15.025 [2024-10-01 15:27:13.387559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:15.025 [2024-10-01 15:27:13.387570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:15.025 [2024-10-01 15:27:13.387581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.025 [2024-10-01 15:27:13.387592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.025 [2024-10-01 15:27:13.387603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:15.025 [2024-10-01 15:27:13.387613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:15.025 [2024-10-01 15:27:13.387623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:15.025 [2024-10-01 15:27:13.387634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.387643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:15.025 [2024-10-01 15:27:13.387653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:27:15.025 [2024-10-01 15:27:13.387662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.404824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.404869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:15.025 [2024-10-01 15:27:13.404901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.133 ms 00:27:15.025 [2024-10-01 15:27:13.404913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.404997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.405009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:15.025 [2024-10-01 15:27:13.405020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:15.025 [2024-10-01 15:27:13.405031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.416236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.416292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:15.025 [2024-10-01 15:27:13.416307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.151 ms 00:27:15.025 [2024-10-01 15:27:13.416328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.416376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.416391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.025 [2024-10-01 15:27:13.416404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:15.025 [2024-10-01 15:27:13.416416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.416534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.416549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.025 [2024-10-01 15:27:13.416566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:15.025 [2024-10-01 15:27:13.416578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.416705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.416721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.025 [2024-10-01 15:27:13.416733] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:27:15.025 [2024-10-01 15:27:13.416749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.422775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.422821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.025 [2024-10-01 15:27:13.422833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.013 ms 00:27:15.025 [2024-10-01 15:27:13.422848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.422964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:15.025 [2024-10-01 15:27:13.422981] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:15.025 [2024-10-01 15:27:13.422993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.423012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:15.025 [2024-10-01 15:27:13.423024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:15.025 [2024-10-01 15:27:13.423034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.434253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.434294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:15.025 [2024-10-01 15:27:13.434307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.217 ms 00:27:15.025 [2024-10-01 15:27:13.434317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.434438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.434450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:15.025 [2024-10-01 15:27:13.434461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:15.025 [2024-10-01 15:27:13.434475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.434529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.434553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:15.025 [2024-10-01 15:27:13.434568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:15.025 [2024-10-01 15:27:13.434577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.434872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.434899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:15.025 [2024-10-01 15:27:13.434910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:27:15.025 [2024-10-01 15:27:13.434920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.434944] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:15.025 [2024-10-01 15:27:13.434966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.434976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:15.025 [2024-10-01 15:27:13.434997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:15.025 [2024-10-01 15:27:13.435007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.442307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:15.025 [2024-10-01 15:27:13.442500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.442514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:15.025 [2024-10-01 15:27:13.442533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.485 ms 00:27:15.025 [2024-10-01 15:27:13.442543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.444673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.444705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:15.025 [2024-10-01 15:27:13.444724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:27:15.025 [2024-10-01 15:27:13.444734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.444815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.444827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:15.025 [2024-10-01 15:27:13.444839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:15.025 [2024-10-01 15:27:13.444848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.444897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.444909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:15.025 [2024-10-01 15:27:13.444919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:15.025 [2024-10-01 15:27:13.444928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.444964] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:15.025 [2024-10-01 15:27:13.444975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.444996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:15.025 [2024-10-01 15:27:13.445013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:15.025 [2024-10-01 15:27:13.445023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.449016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.449066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:15.025 [2024-10-01 15:27:13.449080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.981 ms 00:27:15.025 [2024-10-01 15:27:13.449090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.449156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.025 [2024-10-01 15:27:13.449179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:15.025 [2024-10-01 15:27:13.449191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:27:15.025 [2024-10-01 15:27:13.449200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.025 [2024-10-01 15:27:13.450238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.383 ms, result 0 00:27:48.493  Copying: 29/1024 [MB] (29 MBps)
[32 intermediate spdk_dd progress updates elided; throughput held between 28 and 35 MBps throughout the copy]
Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-01 15:27:46.836281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.493 [2024-10-01 15:27:46.836360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:48.493 [2024-10-01 15:27:46.836378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:48.493 [2024-10-01 15:27:46.836390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.493 [2024-10-01 15:27:46.836416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:48.493 [2024-10-01 15:27:46.837288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.493 [2024-10-01 15:27:46.837319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:48.493 [2024-10-01 15:27:46.837339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:27:48.493 [2024-10-01 15:27:46.837350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.493 [2024-10-01 15:27:46.837558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.493 [2024-10-01 15:27:46.837579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:48.493 [2024-10-01 15:27:46.837591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:27:48.493 [2024-10-01 15:27:46.837610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.493 [2024-10-01 15:27:46.837641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.493 [2024-10-01 15:27:46.837657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:48.493 [2024-10-01 15:27:46.837669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:48.493 [2024-10-01 15:27:46.837680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.493 [2024-10-01 15:27:46.837736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.493 [2024-10-01 15:27:46.837748]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:48.493 [2024-10-01 15:27:46.837758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:48.493 [2024-10-01 15:27:46.837769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.493 [2024-10-01 15:27:46.837786] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:48.493 [2024-10-01 15:27:46.837808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100 elided: ftl_dev_dump_bands again reports every band identically as "0 / 261120 wr_cnt: 0 state: free"]
00:27:48.494 [2024-10-01 15:27:46.839183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:48.494 [2024-10-01 15:27:46.839195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:27:48.494 [2024-10-01 15:27:46.839212] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:48.494 [2024-10-01 15:27:46.839224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:48.494 [2024-10-01 15:27:46.839236] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:48.494 [2024-10-01 15:27:46.839247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:48.494 [2024-10-01 15:27:46.839262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:48.494 [2024-10-01 15:27:46.839273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:48.494 [2024-10-01 15:27:46.839284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:48.494 [2024-10-01 15:27:46.839294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:48.494 [2024-10-01 15:27:46.839303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:48.494 [2024-10-01 15:27:46.839314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.494 [2024-10-01 15:27:46.839325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:48.494 [2024-10-01 15:27:46.839353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:27:48.494 [2024-10-01 15:27:46.839364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.841548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.494 [2024-10-01 15:27:46.841583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:48.494 [2024-10-01 15:27:46.841614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:27:48.494 [2024-10-01 15:27:46.841625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.841739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.494 [2024-10-01 15:27:46.841751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:48.494 [2024-10-01 15:27:46.841763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:48.494 [2024-10-01 15:27:46.841779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.849109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.494 [2024-10-01 15:27:46.849157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:48.494 [2024-10-01 15:27:46.849181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.494 [2024-10-01 15:27:46.849192] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.849255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.494 [2024-10-01 15:27:46.849267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:48.494 [2024-10-01 15:27:46.849278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.494 [2024-10-01 15:27:46.849293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.849362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.494 [2024-10-01 15:27:46.849375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:48.494 [2024-10-01 15:27:46.849387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.494 [2024-10-01 15:27:46.849399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.494 [2024-10-01 15:27:46.849416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.494 [2024-10-01 15:27:46.849427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:48.494 [2024-10-01 15:27:46.849437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.849456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.863851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.863934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:48.495 [2024-10-01 15:27:46.863950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.863961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.873579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.873629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:48.495 [2024-10-01 15:27:46.873644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.873655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.873727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.873738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:48.495 [2024-10-01 15:27:46.873749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.873759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.873784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.873795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:48.495 [2024-10-01 15:27:46.873805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.873815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.873877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.873891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:48.495 [2024-10-01 15:27:46.873902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:48.495 [2024-10-01 15:27:46.873911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.873940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.873962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:48.495 [2024-10-01 15:27:46.873972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.873982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.874019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.874033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:48.495 [2024-10-01 15:27:46.874043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.874061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.874114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.495 [2024-10-01 15:27:46.874126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:48.495 [2024-10-01 15:27:46.874136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.495 [2024-10-01 15:27:46.874154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.495 [2024-10-01 15:27:46.874287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.045 ms, result 0 00:27:48.754 00:27:48.754 00:27:48.754 15:27:47 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:50.657 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:50.657 15:27:48 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:50.657 [2024-10-01 15:27:48.976275] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
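The "md5sum -c" step above is how the restore test decides pass/fail: a checksum of the test file is recorded before the data is pushed through ftl0, and after the fast shutdown and restore the data read back must hash to the same value, which is what the "testfile: OK" record reports. A minimal standalone sketch of that round-trip check (paths and sizes here are illustrative, not the harness's):

    testfile=/tmp/ftl_testfile
    dd if=/dev/urandom of="$testfile" bs=1M count=64 status=none
    md5sum "$testfile" > "$testfile.md5"   # record the reference checksum
    # ... write $testfile through the FTL bdev, fast-shutdown, restore,
    # and read the same range back into $testfile ...
    md5sum -c "$testfile.md5"              # prints "<file>: OK" on a byte-identical restore
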
00:27:50.657 [2024-10-01 15:27:48.976420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93099 ] 00:27:50.657 [2024-10-01 15:27:49.144466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.657 [2024-10-01 15:27:49.195502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.915 [2024-10-01 15:27:49.299261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.915 [2024-10-01 15:27:49.299331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.915 [2024-10-01 15:27:49.459191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.915 [2024-10-01 15:27:49.459245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:50.915 [2024-10-01 15:27:49.459264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:50.915 [2024-10-01 15:27:49.459275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.915 [2024-10-01 15:27:49.459335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.915 [2024-10-01 15:27:49.459349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.915 [2024-10-01 15:27:49.459367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:50.915 [2024-10-01 15:27:49.459377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.915 [2024-10-01 15:27:49.459398] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:51.175 [2024-10-01 15:27:49.459704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:51.175 [2024-10-01 15:27:49.459730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.459741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:51.175 [2024-10-01 15:27:49.459756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:27:51.175 [2024-10-01 15:27:49.459766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.460166] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:51.175 [2024-10-01 15:27:49.460209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.460221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:51.175 [2024-10-01 15:27:49.460232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:51.175 [2024-10-01 15:27:49.460249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.460307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.460325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:51.175 [2024-10-01 15:27:49.460339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:51.175 [2024-10-01 15:27:49.460348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.460750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
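Each management step in the startup sequence above is traced as a fixed group of records (Action or Rollback, then name, duration, and status), so per-step timings can be totalled straight from a captured log. A rough sketch, assuming the console output was saved to a file named build.log (a hypothetical name):

    # Sum every "duration: <n> ms" reported by trace_step records.
    grep -o 'duration: [0-9.]* ms' build.log |
      awk '{ total += $2 } END { printf "traced step time: %.3f ms\n", total }'

Note that the finish_msg records (for example the 38.045 ms 'FTL fast shutdown' total above) give the wall-clock figure for the whole management process, while this sum counts only the individually traced steps.
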
00:27:51.175 [2024-10-01 15:27:49.460769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:51.175 [2024-10-01 15:27:49.460781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:27:51.175 [2024-10-01 15:27:49.460790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.460875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.460891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:51.175 [2024-10-01 15:27:49.460901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:51.175 [2024-10-01 15:27:49.460911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.460935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.460946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:51.175 [2024-10-01 15:27:49.460956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:51.175 [2024-10-01 15:27:49.460973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.461002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:51.175 [2024-10-01 15:27:49.462791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.175 [2024-10-01 15:27:49.462811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:51.175 [2024-10-01 15:27:49.462826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:27:51.175 [2024-10-01 15:27:49.462836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.175 [2024-10-01 15:27:49.462863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.176 [2024-10-01 15:27:49.462874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:51.176 [2024-10-01 15:27:49.462884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:51.176 [2024-10-01 15:27:49.462895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.176 [2024-10-01 15:27:49.462917] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:51.176 [2024-10-01 15:27:49.462947] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:51.176 [2024-10-01 15:27:49.462983] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:51.176 [2024-10-01 15:27:49.463000] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:51.176 [2024-10-01 15:27:49.463096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:51.176 [2024-10-01 15:27:49.463109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:51.176 [2024-10-01 15:27:49.463122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:51.176 [2024-10-01 15:27:49.463134] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463154] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463165] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:51.176 [2024-10-01 15:27:49.463197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:51.176 [2024-10-01 15:27:49.463208] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:51.176 [2024-10-01 15:27:49.463217] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:51.176 [2024-10-01 15:27:49.463227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.176 [2024-10-01 15:27:49.463244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:51.176 [2024-10-01 15:27:49.463254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:27:51.176 [2024-10-01 15:27:49.463271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.176 [2024-10-01 15:27:49.463341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.176 [2024-10-01 15:27:49.463352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:51.176 [2024-10-01 15:27:49.463362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:51.176 [2024-10-01 15:27:49.463376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.176 [2024-10-01 15:27:49.463484] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:51.176 [2024-10-01 15:27:49.463497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:51.176 [2024-10-01 15:27:49.463516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:51.176 [2024-10-01 15:27:49.463573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:51.176 [2024-10-01 15:27:49.463601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.176 [2024-10-01 15:27:49.463619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:51.176 [2024-10-01 15:27:49.463628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:51.176 [2024-10-01 15:27:49.463638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:51.176 [2024-10-01 15:27:49.463647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:51.176 [2024-10-01 15:27:49.463656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:51.176 [2024-10-01 15:27:49.463665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:51.176 [2024-10-01 15:27:49.463683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463692] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:51.176 [2024-10-01 15:27:49.463714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:51.176 [2024-10-01 15:27:49.463741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:51.176 [2024-10-01 15:27:49.463768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:51.176 [2024-10-01 15:27:49.463794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:51.176 [2024-10-01 15:27:49.463820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.176 [2024-10-01 15:27:49.463840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:51.176 [2024-10-01 15:27:49.463849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:51.176 [2024-10-01 15:27:49.463863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:51.176 [2024-10-01 15:27:49.463872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:51.176 [2024-10-01 15:27:49.463881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:51.176 [2024-10-01 15:27:49.463890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:51.176 [2024-10-01 15:27:49.463908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:51.176 [2024-10-01 15:27:49.463917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:51.176 [2024-10-01 15:27:49.463936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:51.176 [2024-10-01 15:27:49.463946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:51.176 [2024-10-01 15:27:49.463955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:51.176 [2024-10-01 15:27:49.463965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:51.176 [2024-10-01 15:27:49.463974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:51.176 [2024-10-01 15:27:49.463983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:51.176 
[2024-10-01 15:27:49.463992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:51.176 [2024-10-01 15:27:49.464001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:51.176 [2024-10-01 15:27:49.464013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:51.176 [2024-10-01 15:27:49.464024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:51.176 [2024-10-01 15:27:49.464039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:51.176 [2024-10-01 15:27:49.464060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:51.176 [2024-10-01 15:27:49.464070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:51.176 [2024-10-01 15:27:49.464080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:51.176 [2024-10-01 15:27:49.464090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:51.176 [2024-10-01 15:27:49.464100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:51.176 [2024-10-01 15:27:49.464110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:51.176 [2024-10-01 15:27:49.464120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:51.176 [2024-10-01 15:27:49.464130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:51.176 [2024-10-01 15:27:49.464139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:51.176 [2024-10-01 15:27:49.464207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:51.176 [2024-10-01 15:27:49.464225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.176 [2024-10-01 15:27:49.464246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:51.176 [2024-10-01 15:27:49.464257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:51.176 [2024-10-01 15:27:49.464267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:51.177 [2024-10-01 15:27:49.464278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.464288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:51.177 [2024-10-01 15:27:49.464298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:27:51.177 [2024-10-01 15:27:49.464307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.481751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.481791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:51.177 [2024-10-01 15:27:49.481809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.417 ms 00:27:51.177 [2024-10-01 15:27:49.481819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.481900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.481911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:51.177 [2024-10-01 15:27:49.481922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:51.177 [2024-10-01 15:27:49.481932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.493393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.493438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:51.177 [2024-10-01 15:27:49.493459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.417 ms 00:27:51.177 [2024-10-01 15:27:49.493472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.493515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.493529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:51.177 [2024-10-01 15:27:49.493543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:51.177 [2024-10-01 15:27:49.493556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.493688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.493704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:51.177 [2024-10-01 15:27:49.493718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:51.177 [2024-10-01 15:27:49.493735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.493872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.493890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:51.177 [2024-10-01 15:27:49.493903] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:27:51.177 [2024-10-01 15:27:49.493919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.500049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.500085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:51.177 [2024-10-01 15:27:49.500108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:27:51.177 [2024-10-01 15:27:49.500118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.500256] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:51.177 [2024-10-01 15:27:49.500274] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:51.177 [2024-10-01 15:27:49.500286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.500307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:51.177 [2024-10-01 15:27:49.500317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:51.177 [2024-10-01 15:27:49.500328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.511556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.511591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:51.177 [2024-10-01 15:27:49.511603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.227 ms 00:27:51.177 [2024-10-01 15:27:49.511614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.511733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.511745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:51.177 [2024-10-01 15:27:49.511755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:51.177 [2024-10-01 15:27:49.511765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.511825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.511837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:51.177 [2024-10-01 15:27:49.511848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:51.177 [2024-10-01 15:27:49.511861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.512163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.512194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:51.177 [2024-10-01 15:27:49.512205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:27:51.177 [2024-10-01 15:27:49.512215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.512237] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:51.177 [2024-10-01 15:27:49.512250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.512262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:51.177 [2024-10-01 15:27:49.512272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:51.177 [2024-10-01 15:27:49.512285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.519712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:51.177 [2024-10-01 15:27:49.519898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.519923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:51.177 [2024-10-01 15:27:49.519934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.605 ms 00:27:51.177 [2024-10-01 15:27:49.519944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.521994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.522026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:51.177 [2024-10-01 15:27:49.522037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:27:51.177 [2024-10-01 15:27:49.522047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.522131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.522144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:51.177 [2024-10-01 15:27:49.522155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:51.177 [2024-10-01 15:27:49.522165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.522223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.522247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:51.177 [2024-10-01 15:27:49.522258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:51.177 [2024-10-01 15:27:49.522274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.522311] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:51.177 [2024-10-01 15:27:49.522323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.522336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:51.177 [2024-10-01 15:27:49.522347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:51.177 [2024-10-01 15:27:49.522356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.526536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.526568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:51.177 [2024-10-01 15:27:49.526586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:27:51.177 [2024-10-01 15:27:49.526597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.526667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.177 [2024-10-01 15:27:49.526680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:51.177 [2024-10-01 15:27:49.526691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:27:51.177 [2024-10-01 15:27:49.526700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.177 [2024-10-01 15:27:49.527800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.277 ms, result 0 00:28:29.354  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 86/1024 [MB] (29 MBps) Copying: 114/1024 [MB] (28 MBps) Copying: 144/1024 [MB] (29 MBps) Copying: 173/1024 [MB] (28 MBps) Copying: 202/1024 [MB] (28 MBps) Copying: 229/1024 [MB] (27 MBps) Copying: 258/1024 [MB] (28 MBps) Copying: 286/1024 [MB] (28 MBps) Copying: 317/1024 [MB] (30 MBps) Copying: 345/1024 [MB] (28 MBps) Copying: 374/1024 [MB] (28 MBps) Copying: 402/1024 [MB] (28 MBps) Copying: 430/1024 [MB] (28 MBps) Copying: 459/1024 [MB] (28 MBps) Copying: 486/1024 [MB] (27 MBps) Copying: 513/1024 [MB] (27 MBps) Copying: 541/1024 [MB] (27 MBps) Copying: 568/1024 [MB] (26 MBps) Copying: 595/1024 [MB] (26 MBps) Copying: 620/1024 [MB] (25 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 672/1024 [MB] (25 MBps) Copying: 698/1024 [MB] (26 MBps) Copying: 724/1024 [MB] (26 MBps) Copying: 750/1024 [MB] (26 MBps) Copying: 778/1024 [MB] (27 MBps) Copying: 803/1024 [MB] (25 MBps) Copying: 829/1024 [MB] (26 MBps) Copying: 856/1024 [MB] (26 MBps) Copying: 884/1024 [MB] (27 MBps) Copying: 911/1024 [MB] (27 MBps) Copying: 939/1024 [MB] (27 MBps) Copying: 966/1024 [MB] (27 MBps) Copying: 993/1024 [MB] (26 MBps) Copying: 1019/1024 [MB] (26 MBps) Copying: 1048568/1048576 [kB] (4556 kBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-01 15:28:27.492746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.354 [2024-10-01 15:28:27.492826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:29.354 [2024-10-01 15:28:27.492844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:29.354 [2024-10-01 15:28:27.492855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.354 [2024-10-01 15:28:27.494546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:29.354 [2024-10-01 15:28:27.496702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.354 [2024-10-01 15:28:27.496738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:29.354 [2024-10-01 15:28:27.496753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:28:29.354 [2024-10-01 15:28:27.496763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.354 [2024-10-01 15:28:27.506344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.354 [2024-10-01 15:28:27.506386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:29.355 [2024-10-01 15:28:27.506406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.518 ms 00:28:29.355 [2024-10-01 15:28:27.506417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.355 [2024-10-01 15:28:27.506446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.355 [2024-10-01 15:28:27.506458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:29.355 [2024-10-01 15:28:27.506479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:29.355 [2024-10-01 15:28:27.506489] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:29.355 [2024-10-01 15:28:27.506541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.355 [2024-10-01 15:28:27.506552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:29.355 [2024-10-01 15:28:27.506562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:29.355 [2024-10-01 15:28:27.506575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.355 [2024-10-01 15:28:27.506590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:29.355 [2024-10-01 15:28:27.506603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125952 / 261120 wr_cnt: 1 state: open 00:28:29.355 [2024-10-01 15:28:27.506622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.506993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507100] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 
15:28:27.507372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:29.355 [2024-10-01 15:28:27.507403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:28:29.356 [2024-10-01 15:28:27.507642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:29.356 [2024-10-01 15:28:27.507701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:29.356 [2024-10-01 15:28:27.507711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:28:29.356 [2024-10-01 15:28:27.507726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125952 00:28:29.356 [2024-10-01 15:28:27.507743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125984 00:28:29.356 [2024-10-01 15:28:27.507753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125952 00:28:29.356 [2024-10-01 15:28:27.507763] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:28:29.356 [2024-10-01 15:28:27.507773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:29.356 [2024-10-01 15:28:27.507783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:29.356 [2024-10-01 15:28:27.507796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:29.356 [2024-10-01 15:28:27.507805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:29.356 [2024-10-01 15:28:27.507814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:29.356 [2024-10-01 15:28:27.507824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.356 [2024-10-01 15:28:27.507834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:29.356 [2024-10-01 15:28:27.507843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:28:29.356 [2024-10-01 15:28:27.507862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.509637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.356 [2024-10-01 15:28:27.509662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:29.356 [2024-10-01 15:28:27.509673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:28:29.356 [2024-10-01 15:28:27.509683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.509797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.356 [2024-10-01 15:28:27.509816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:29.356 [2024-10-01 15:28:27.509826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:29.356 [2024-10-01 15:28:27.509836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.515826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.515856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize reloc 00:28:29.356 [2024-10-01 15:28:27.515868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.515884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.515940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.515952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.356 [2024-10-01 15:28:27.515962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.515971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.516027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.516041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.356 [2024-10-01 15:28:27.516051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.516061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.516083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.516094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.356 [2024-10-01 15:28:27.516104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.516114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.529778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.529831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.356 [2024-10-01 15:28:27.529844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.529862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.539255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.356 [2024-10-01 15:28:27.539270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.539345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:29.356 [2024-10-01 15:28:27.539355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.539413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:29.356 [2024-10-01 15:28:27.539423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 
15:28:27.539516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:29.356 [2024-10-01 15:28:27.539537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.539600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:29.356 [2024-10-01 15:28:27.539611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.356 [2024-10-01 15:28:27.539657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.356 [2024-10-01 15:28:27.539668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:29.356 [2024-10-01 15:28:27.539685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.356 [2024-10-01 15:28:27.539695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.357 [2024-10-01 15:28:27.539741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.357 [2024-10-01 15:28:27.539753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:29.357 [2024-10-01 15:28:27.539763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.357 [2024-10-01 15:28:27.539773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.357 [2024-10-01 15:28:27.539892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 48.314 ms, result 0 00:28:29.616 00:28:29.616 00:28:29.875 15:28:28 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:29.875 [2024-10-01 15:28:28.254602] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization... 
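The 'FTL fast shutdown' above finishes with result 0, and the statistics dump just before it reports WAF 1.0003, which matches total writes divided by user writes (125984 / 125952). The test then reopens the device and reads a range of the data back for verification. A minimal sketch of that read-back and verify pattern, assembled only from the paths and flags visible in this log:

    # Read 262144 blocks from the restored ftl0 bdev, skipping the first
    # 131072 blocks; --json points spdk_dd at the saved FTL bdev config.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --skip=131072 \
      --count=262144
    # Compare against the checksum recorded before the shutdown/restore cycle.
    md5sum -c "$SPDK/test/ftl/testfile.md5"

restore.sh runs the checksum comparison as its own step (@82, further down in this log) once spdk_dd exits.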
00:28:29.875 [2024-10-01 15:28:28.254745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93491 ] 00:28:30.134 [2024-10-01 15:28:28.424359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.134 [2024-10-01 15:28:28.476697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.134 [2024-10-01 15:28:28.587239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.134 [2024-10-01 15:28:28.587331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.394 [2024-10-01 15:28:28.747472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.747544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:30.394 [2024-10-01 15:28:28.747571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.394 [2024-10-01 15:28:28.747581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.747651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.747667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.394 [2024-10-01 15:28:28.747678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:30.394 [2024-10-01 15:28:28.747688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.747710] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:30.394 [2024-10-01 15:28:28.748024] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:30.394 [2024-10-01 15:28:28.748045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.748055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:30.394 [2024-10-01 15:28:28.748066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:28:30.394 [2024-10-01 15:28:28.748083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.748621] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:30.394 [2024-10-01 15:28:28.748651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.748666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:30.394 [2024-10-01 15:28:28.748684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:30.394 [2024-10-01 15:28:28.748695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.748749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.748761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:30.394 [2024-10-01 15:28:28.748774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:30.394 [2024-10-01 15:28:28.748784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.749191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:30.394 [2024-10-01 15:28:28.749217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.394 [2024-10-01 15:28:28.749228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:28:30.394 [2024-10-01 15:28:28.749238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.749327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.749348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.394 [2024-10-01 15:28:28.749358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:30.394 [2024-10-01 15:28:28.749368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.749399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.749410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:30.394 [2024-10-01 15:28:28.749420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:30.394 [2024-10-01 15:28:28.749430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.749453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:30.394 [2024-10-01 15:28:28.751263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.751293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.394 [2024-10-01 15:28:28.751304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.818 ms 00:28:30.394 [2024-10-01 15:28:28.751314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.751345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.751357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:30.394 [2024-10-01 15:28:28.751367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:30.394 [2024-10-01 15:28:28.751377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.751417] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:30.394 [2024-10-01 15:28:28.751446] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:30.394 [2024-10-01 15:28:28.751484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:30.394 [2024-10-01 15:28:28.751501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:30.394 [2024-10-01 15:28:28.751600] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:30.394 [2024-10-01 15:28:28.751614] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:30.394 [2024-10-01 15:28:28.751626] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:30.394 [2024-10-01 15:28:28.751640] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:30.394 [2024-10-01 15:28:28.751651] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:30.394 [2024-10-01 15:28:28.751662] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:30.394 [2024-10-01 15:28:28.751676] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:30.394 [2024-10-01 15:28:28.751686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:30.394 [2024-10-01 15:28:28.751695] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:30.394 [2024-10-01 15:28:28.751706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.751715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:30.394 [2024-10-01 15:28:28.751735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:28:30.394 [2024-10-01 15:28:28.751746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.394 [2024-10-01 15:28:28.751821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.394 [2024-10-01 15:28:28.751832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:30.395 [2024-10-01 15:28:28.751842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:30.395 [2024-10-01 15:28:28.751855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.751941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:30.395 [2024-10-01 15:28:28.751954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:30.395 [2024-10-01 15:28:28.751965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.395 [2024-10-01 15:28:28.751975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.751985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:30.395 [2024-10-01 15:28:28.752004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:30.395 [2024-10-01 15:28:28.752033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.395 [2024-10-01 15:28:28.752051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:30.395 [2024-10-01 15:28:28.752061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:30.395 [2024-10-01 15:28:28.752076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.395 [2024-10-01 15:28:28.752085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:30.395 [2024-10-01 15:28:28.752094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:30.395 [2024-10-01 15:28:28.752103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:30.395 [2024-10-01 15:28:28.752122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752131] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:30.395 [2024-10-01 15:28:28.752150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:30.395 [2024-10-01 15:28:28.752190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:30.395 [2024-10-01 15:28:28.752218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:30.395 [2024-10-01 15:28:28.752249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:30.395 [2024-10-01 15:28:28.752278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.395 [2024-10-01 15:28:28.752296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:30.395 [2024-10-01 15:28:28.752305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:30.395 [2024-10-01 15:28:28.752314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.395 [2024-10-01 15:28:28.752324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:30.395 [2024-10-01 15:28:28.752333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:30.395 [2024-10-01 15:28:28.752342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:30.395 [2024-10-01 15:28:28.752360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:30.395 [2024-10-01 15:28:28.752369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:30.395 [2024-10-01 15:28:28.752392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:30.395 [2024-10-01 15:28:28.752402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.395 [2024-10-01 15:28:28.752422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:30.395 [2024-10-01 15:28:28.752431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:30.395 [2024-10-01 15:28:28.752441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:30.395 
[2024-10-01 15:28:28.752450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:30.395 [2024-10-01 15:28:28.752459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:30.395 [2024-10-01 15:28:28.752468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:30.395 [2024-10-01 15:28:28.752480] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:30.395 [2024-10-01 15:28:28.752496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:30.395 [2024-10-01 15:28:28.752520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:30.395 [2024-10-01 15:28:28.752530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:30.395 [2024-10-01 15:28:28.752540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:30.395 [2024-10-01 15:28:28.752551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:30.395 [2024-10-01 15:28:28.752566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:30.395 [2024-10-01 15:28:28.752576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:30.395 [2024-10-01 15:28:28.752586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:30.395 [2024-10-01 15:28:28.752596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:30.395 [2024-10-01 15:28:28.752606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:30.395 [2024-10-01 15:28:28.752656] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:30.395 [2024-10-01 15:28:28.752667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.395 [2024-10-01 15:28:28.752689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:30.395 [2024-10-01 15:28:28.752700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:30.395 [2024-10-01 15:28:28.752710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:30.395 [2024-10-01 15:28:28.752721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.752734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:30.395 [2024-10-01 15:28:28.752751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:28:30.395 [2024-10-01 15:28:28.752768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.769861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.769923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:30.395 [2024-10-01 15:28:28.769944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.048 ms 00:28:30.395 [2024-10-01 15:28:28.769954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.770055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.770067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:30.395 [2024-10-01 15:28:28.770077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:30.395 [2024-10-01 15:28:28.770088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.781884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.781945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:30.395 [2024-10-01 15:28:28.781969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.733 ms 00:28:30.395 [2024-10-01 15:28:28.781983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.782048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.782076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:30.395 [2024-10-01 15:28:28.782090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.395 [2024-10-01 15:28:28.782103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.395 [2024-10-01 15:28:28.782279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.395 [2024-10-01 15:28:28.782297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:30.395 [2024-10-01 15:28:28.782321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:30.395 [2024-10-01 15:28:28.782340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.782493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.782512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:30.396 [2024-10-01 15:28:28.782523] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:30.396 [2024-10-01 15:28:28.782533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.788553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.788608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:30.396 [2024-10-01 15:28:28.788623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.005 ms 00:28:30.396 [2024-10-01 15:28:28.788634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.788792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:30.396 [2024-10-01 15:28:28.788809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:30.396 [2024-10-01 15:28:28.788825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.788843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:30.396 [2024-10-01 15:28:28.788863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:30.396 [2024-10-01 15:28:28.788873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.800132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.800205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:30.396 [2024-10-01 15:28:28.800221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.254 ms 00:28:30.396 [2024-10-01 15:28:28.800231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.800366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.800379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:30.396 [2024-10-01 15:28:28.800402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:28:30.396 [2024-10-01 15:28:28.800412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.800485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.800497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:30.396 [2024-10-01 15:28:28.800512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:30.396 [2024-10-01 15:28:28.800525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.800842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.800867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:30.396 [2024-10-01 15:28:28.800879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:28:30.396 [2024-10-01 15:28:28.800889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.800917] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:30.396 [2024-10-01 15:28:28.800930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.800941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:30.396 [2024-10-01 15:28:28.800951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:30.396 [2024-10-01 15:28:28.800972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.808676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:30.396 [2024-10-01 15:28:28.808904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.808919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:30.396 [2024-10-01 15:28:28.808932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:28:30.396 [2024-10-01 15:28:28.808942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.811160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.811207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:30.396 [2024-10-01 15:28:28.811219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:28:30.396 [2024-10-01 15:28:28.811229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.811312] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:28:30.396 [2024-10-01 15:28:28.811878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.811897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:30.396 [2024-10-01 15:28:28.811908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:28:30.396 [2024-10-01 15:28:28.811919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.811979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.811991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:30.396 [2024-10-01 15:28:28.812001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:30.396 [2024-10-01 15:28:28.812010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.812052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:30.396 [2024-10-01 15:28:28.812065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.812075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:30.396 [2024-10-01 15:28:28.812085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:30.396 [2024-10-01 15:28:28.812095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.816439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.816487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:30.396 [2024-10-01 15:28:28.816508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:28:30.396 [2024-10-01 15:28:28.816518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.816609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.396 [2024-10-01 15:28:28.816623] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:30.396 [2024-10-01 15:28:28.816635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:30.396 [2024-10-01 15:28:28.816662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.396 [2024-10-01 15:28:28.817879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 70.094 ms, result 0 00:29:07.653  Copying: 25/1024 [MB] (25 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (27 MBps) Copying: 133/1024 [MB] (27 MBps) Copying: 161/1024 [MB] (28 MBps) Copying: 188/1024 [MB] (26 MBps) Copying: 215/1024 [MB] (27 MBps) Copying: 241/1024 [MB] (26 MBps) Copying: 268/1024 [MB] (26 MBps) Copying: 294/1024 [MB] (26 MBps) Copying: 320/1024 [MB] (25 MBps) Copying: 347/1024 [MB] (26 MBps) Copying: 373/1024 [MB] (25 MBps) Copying: 399/1024 [MB] (26 MBps) Copying: 426/1024 [MB] (26 MBps) Copying: 453/1024 [MB] (27 MBps) Copying: 479/1024 [MB] (26 MBps) Copying: 507/1024 [MB] (27 MBps) Copying: 533/1024 [MB] (26 MBps) Copying: 559/1024 [MB] (25 MBps) Copying: 587/1024 [MB] (28 MBps) Copying: 616/1024 [MB] (28 MBps) Copying: 647/1024 [MB] (31 MBps) Copying: 679/1024 [MB] (32 MBps) Copying: 708/1024 [MB] (29 MBps) Copying: 738/1024 [MB] (29 MBps) Copying: 766/1024 [MB] (28 MBps) Copying: 796/1024 [MB] (29 MBps) Copying: 827/1024 [MB] (30 MBps) Copying: 856/1024 [MB] (29 MBps) Copying: 886/1024 [MB] (29 MBps) Copying: 915/1024 [MB] (28 MBps) Copying: 944/1024 [MB] (29 MBps) Copying: 974/1024 [MB] (29 MBps) Copying: 1004/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-01 15:29:06.041176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.653 [2024-10-01 15:29:06.041701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:07.653 [2024-10-01 15:29:06.041828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:07.653 [2024-10-01 15:29:06.041878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.653 [2024-10-01 15:29:06.042010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:07.653 [2024-10-01 15:29:06.042849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.653 [2024-10-01 15:29:06.042980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:07.653 [2024-10-01 15:29:06.043079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:29:07.653 [2024-10-01 15:29:06.043142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.653 [2024-10-01 15:29:06.043688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.653 [2024-10-01 15:29:06.043828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:07.653 [2024-10-01 15:29:06.043927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:29:07.653 [2024-10-01 15:29:06.043975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.653 [2024-10-01 15:29:06.044082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.653 [2024-10-01 15:29:06.044206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:07.653 [2024-10-01 15:29:06.044257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 
00:29:07.653 [2024-10-01 15:29:06.044306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.653 [2024-10-01 15:29:06.044496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.653 [2024-10-01 15:29:06.044542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:07.653 [2024-10-01 15:29:06.044654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:07.653 [2024-10-01 15:29:06.044706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.653 [2024-10-01 15:29:06.044816] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:07.653 [2024-10-01 15:29:06.044866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:07.653 [2024-10-01 15:29:06.044990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.045992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 
[2024-10-01 15:29:06.046067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:07.653 [2024-10-01 15:29:06.046582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:29:07.654 [2024-10-01 15:29:06.046726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.046985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.047995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.048006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:07.654 [2024-10-01 15:29:06.048027] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:07.654 [2024-10-01 15:29:06.048045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e5b3aa04-2dd4-4ab1-bb95-898329cbe42d 00:29:07.654 [2024-10-01 15:29:06.048068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:07.654 [2024-10-01 15:29:06.048090] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5152 00:29:07.654 [2024-10-01 15:29:06.048101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5120 00:29:07.654 [2024-10-01 15:29:06.048113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0063 00:29:07.654 [2024-10-01 15:29:06.048124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:07.654 [2024-10-01 15:29:06.048143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:07.654 [2024-10-01 15:29:06.048162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:07.654 [2024-10-01 15:29:06.048185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:07.654 [2024-10-01 15:29:06.048195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:07.654 [2024-10-01 15:29:06.048207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.654 [2024-10-01 15:29:06.048218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:07.654 [2024-10-01 15:29:06.048230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.398 ms 00:29:07.654 [2024-10-01 15:29:06.048241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.654 [2024-10-01 15:29:06.051012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.654 [2024-10-01 15:29:06.051043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:07.654 [2024-10-01 15:29:06.051056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:29:07.654 [2024-10-01 15:29:06.051307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.654 [2024-10-01 15:29:06.051423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.654 [2024-10-01 15:29:06.051435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:07.654 [2024-10-01 15:29:06.051447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:07.654 [2024-10-01 15:29:06.051458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.654 [2024-10-01 15:29:06.059018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.654 [2024-10-01 15:29:06.059072] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:07.654 [2024-10-01 15:29:06.059093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.654 [2024-10-01 15:29:06.059104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.654 [2024-10-01 15:29:06.059167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.654 [2024-10-01 15:29:06.059437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:07.654 [2024-10-01 15:29:06.059450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.654 [2024-10-01 15:29:06.059460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.654 [2024-10-01 15:29:06.059531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.654 [2024-10-01 15:29:06.059553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:07.654 [2024-10-01 15:29:06.059571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.654 [2024-10-01 15:29:06.059595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.654 [2024-10-01 15:29:06.059622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.654 [2024-10-01 15:29:06.059639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:07.654 [2024-10-01 15:29:06.059656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.059673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.073628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.073685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:07.655 [2024-10-01 15:29:06.073708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.073719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.083575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.083628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:07.655 [2024-10-01 15:29:06.083655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.083667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.083731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.083742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:07.655 [2024-10-01 15:29:06.083754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.083765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.083811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.083829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:07.655 [2024-10-01 15:29:06.083840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.083850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.083913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.083925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:07.655 [2024-10-01 15:29:06.083943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.083953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.083984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.084005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:07.655 [2024-10-01 15:29:06.084015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.084026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.084070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.084083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:07.655 [2024-10-01 15:29:06.084094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.084104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.084152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:07.655 [2024-10-01 15:29:06.084164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:07.655 [2024-10-01 15:29:06.084188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:07.655 [2024-10-01 15:29:06.084198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:07.655 [2024-10-01 15:29:06.084323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.189 ms, result 0
00:29:07.914
00:29:07.914
00:29:07.914 15:29:06 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:09.815 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92170
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92170 ']'
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92170
00:29:09.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92170) - No such process
00:29:09.815 Process with pid 92170 is not found
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92170 is not found'
00:29:09.815 Remove shared memory files
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_band_md /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_l2p_l1 /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_l2p_l2 /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_l2p_l2_ctx /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_nvc_md /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_p2l_pool /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_sb /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_sb_shm /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_trim_bitmap /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_trim_log /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_trim_md /dev/hugepages/ftl_e5b3aa04-2dd4-4ab1-bb95-898329cbe42d_vmap
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:29:09.815
00:29:09.815 real 2m48.929s
00:29:09.815 user 2m37.230s
00:29:09.815 sys 0m13.418s
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:09.815 15:29:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:29:09.815 ************************************
00:29:09.815 END TEST ftl_restore_fast
00:29:09.815 ************************************
00:29:09.815 15:29:08 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:29:09.815 15:29:08 ftl -- ftl/ftl.sh@14 -- # killprocess 85331
00:29:09.815 15:29:08 ftl -- common/autotest_common.sh@950 -- # '[' -z 85331 ']'
00:29:09.815 15:29:08 ftl -- common/autotest_common.sh@954 -- # kill -0 85331
00:29:09.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85331) - No such process
00:29:09.815 Process with pid 85331 is not found
00:29:09.815 15:29:08 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 85331 is not found'
00:29:09.815 15:29:08 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:29:10.074 15:29:08 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=93917
00:29:10.074 15:29:08 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:10.074 15:29:08 ftl -- ftl/ftl.sh@20 -- # waitforlisten 93917
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@831 -- # '[' -z 93917 ']'
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:10.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:10.074 15:29:08 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:10.074 [2024-10-01 15:29:08.464839] Starting SPDK v25.01-pre git sha1 e9b861378 / DPDK 23.11.0 initialization...
00:29:10.074 [2024-10-01 15:29:08.464995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93917 ]
00:29:10.333 [2024-10-01 15:29:08.633447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:10.333 [2024-10-01 15:29:08.683486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:10.906 15:29:09 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:10.906 15:29:09 ftl -- common/autotest_common.sh@864 -- # return 0
00:29:10.906 15:29:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:11.206 nvme0n1
00:29:11.206 15:29:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:11.206 15:29:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:11.206 15:29:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:11.465 15:29:09 ftl -- ftl/common.sh@28 -- # stores=06132b78-7f03-4545-9f63-20222a2fd68e
00:29:11.465 15:29:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:11.465 15:29:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06132b78-7f03-4545-9f63-20222a2fd68e
00:29:11.724 15:29:10 ftl -- ftl/ftl.sh@23 -- # killprocess 93917
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@950 -- # '[' -z 93917 ']'
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@954 -- # kill -0 93917
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@955 -- # uname
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93917
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:11.724 killing process with pid 93917
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93917'
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@969 -- # kill 93917
00:29:11.724 15:29:10 ftl -- common/autotest_common.sh@974 -- # wait 93917
00:29:11.983 15:29:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:12.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:12.501 Waiting for block devices as requested
00:29:12.501 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:12.501 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:12.760 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:12.760 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:29:18.071 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:29:18.071 15:29:16 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:29:18.071 15:29:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:18.071 Remove shared memory files
00:29:18.071 15:29:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:29:18.071 15:29:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:29:18.071 15:29:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:29:18.071 15:29:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:18.071 15:29:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:29:18.071
00:29:18.071 real 12m31.186s
00:29:18.071 user 14m36.238s
00:29:18.071 sys 1m40.801s
00:29:18.071 15:29:16 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:18.071 15:29:16 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:18.071 ************************************
00:29:18.071 END TEST ftl
00:29:18.071 ************************************
00:29:18.071 15:29:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:29:18.071 15:29:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:29:18.071 15:29:16 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:29:18.071 15:29:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:29:18.071 15:29:16 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:29:18.071 15:29:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:29:18.071 15:29:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:29:18.071 15:29:16 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:29:18.071 15:29:16 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:29:18.071 15:29:16 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:29:18.071 15:29:16 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:18.071 15:29:16 -- common/autotest_common.sh@10 -- # set +x
00:29:18.071 15:29:16 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:29:18.071 15:29:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:29:18.071 15:29:16 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:29:18.071 15:29:16 -- common/autotest_common.sh@10 -- # set +x
00:29:19.979 INFO: APP EXITING
00:29:19.979 INFO: killing all VMs
00:29:19.979 INFO: killing vhost app
00:29:19.979 INFO: EXIT DONE
00:29:20.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:20.832 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:29:20.832 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:29:20.832 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:29:20.832 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:29:21.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:21.965 Cleaning
00:29:21.965 Removing: /var/run/dpdk/spdk0/config
00:29:21.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:29:21.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:21.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:21.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:21.965 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:21.965 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:21.965 Removing: /var/run/dpdk/spdk0
00:29:21.965 Removing: /var/run/dpdk/spdk_pid70488
00:29:21.965 Removing: /var/run/dpdk/spdk_pid70651
00:29:21.965 Removing: /var/run/dpdk/spdk_pid70858
00:29:21.965 Removing: /var/run/dpdk/spdk_pid70946
00:29:21.965 Removing: /var/run/dpdk/spdk_pid70974
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71086
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71104
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71292
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71371
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71445
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71545
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71631
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71667
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71709
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71774
00:29:21.965 Removing: /var/run/dpdk/spdk_pid71891
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72327
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72380
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72432
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72448
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72517
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72533
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72602
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72618
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72671
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72689
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72731
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72749
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72882
00:29:21.965 Removing: /var/run/dpdk/spdk_pid72918
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73002
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73168
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73241
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73272
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73700
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73787
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73885
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73927
00:29:21.965 Removing: /var/run/dpdk/spdk_pid73947
00:29:21.965 Removing: /var/run/dpdk/spdk_pid74031
00:29:21.965 Removing: /var/run/dpdk/spdk_pid74656
00:29:21.965 Removing: /var/run/dpdk/spdk_pid74687
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75163
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75250
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75354
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75396
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75416
00:29:21.965 Removing: /var/run/dpdk/spdk_pid75447
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77311
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77426
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77436
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77453
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77492
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77496
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77508
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77558
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77562
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77574
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77624
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77628
00:29:21.965 Removing: /var/run/dpdk/spdk_pid77640
00:29:21.965 Removing: /var/run/dpdk/spdk_pid79041
00:29:21.965 Removing: /var/run/dpdk/spdk_pid79127
00:29:21.965 Removing: /var/run/dpdk/spdk_pid80538
00:29:21.965 Removing: /var/run/dpdk/spdk_pid81890
00:29:21.965 Removing: /var/run/dpdk/spdk_pid81955
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82020
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82088
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82171
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82234
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82376
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82731
00:29:22.223 Removing: /var/run/dpdk/spdk_pid82762
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83204
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83383
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83471
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83565
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83608
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83628
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83931
00:29:22.223 Removing: /var/run/dpdk/spdk_pid83969
00:29:22.223 Removing: /var/run/dpdk/spdk_pid84025
00:29:22.223 Removing: /var/run/dpdk/spdk_pid84397
00:29:22.223 Removing: /var/run/dpdk/spdk_pid84537
00:29:22.223 Removing: /var/run/dpdk/spdk_pid85331
00:29:22.223 Removing: /var/run/dpdk/spdk_pid85452
00:29:22.223 Removing: /var/run/dpdk/spdk_pid85648
00:29:22.223 Removing: /var/run/dpdk/spdk_pid85730
00:29:22.223 Removing: /var/run/dpdk/spdk_pid86071
00:29:22.223 Removing: /var/run/dpdk/spdk_pid86319
00:29:22.223 Removing: /var/run/dpdk/spdk_pid86679
00:29:22.223 Removing: /var/run/dpdk/spdk_pid86861
00:29:22.223 Removing: /var/run/dpdk/spdk_pid86969
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87011
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87126
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87140
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87182
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87364
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87573
00:29:22.223 Removing: /var/run/dpdk/spdk_pid87962
00:29:22.223 Removing: /var/run/dpdk/spdk_pid88365
00:29:22.223 Removing: /var/run/dpdk/spdk_pid88731
00:29:22.223 Removing: /var/run/dpdk/spdk_pid89167
00:29:22.223 Removing: /var/run/dpdk/spdk_pid89299
00:29:22.223 Removing: /var/run/dpdk/spdk_pid89381
00:29:22.223 Removing: /var/run/dpdk/spdk_pid89958
00:29:22.223 Removing: /var/run/dpdk/spdk_pid90025
00:29:22.223 Removing: /var/run/dpdk/spdk_pid90449
00:29:22.223 Removing: /var/run/dpdk/spdk_pid90797
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91276
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91398
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91424
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91477
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91527
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91580
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91747
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91816
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91866
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91947
00:29:22.223 Removing: /var/run/dpdk/spdk_pid91982
00:29:22.223 Removing: /var/run/dpdk/spdk_pid92032
00:29:22.223 Removing: /var/run/dpdk/spdk_pid92170
00:29:22.223 Removing: /var/run/dpdk/spdk_pid92367
00:29:22.223 Removing: /var/run/dpdk/spdk_pid92737
00:29:22.223 Removing: /var/run/dpdk/spdk_pid93099
00:29:22.223 Removing: /var/run/dpdk/spdk_pid93491
00:29:22.223 Removing: /var/run/dpdk/spdk_pid93917
00:29:22.223 Clean
00:29:22.481 15:29:20 -- common/autotest_common.sh@1451 -- # return 0
00:29:22.481 15:29:20 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:29:22.481 15:29:20 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:22.481 15:29:20 -- common/autotest_common.sh@10 -- # set +x
00:29:22.481 15:29:20 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:29:22.481 15:29:20 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:22.481 15:29:20 -- common/autotest_common.sh@10 -- # set +x
00:29:22.481 15:29:20 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:22.481 15:29:20 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:22.481 15:29:20 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:22.481 15:29:20 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:29:22.481 15:29:20 -- spdk/autotest.sh@394 -- # hostname
00:29:22.481 15:29:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:22.738 geninfo: WARNING: invalid characters removed from testname!
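For reference, the clear_lvols step traced earlier (ftl/ftl.sh@22, ftl/common.sh@28-30) reduces to a short RPC sequence against the freshly started spdk_tgt. A minimal sketch, assuming a running target and the repo paths from this log (the $rpc shorthand is illustrative, not part of the scripts themselves):

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the PCIe NVMe controller the FTL tests ran against.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # Delete every lvol store found on it, as clear_lvols does above.
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done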
00:29:49.314 15:29:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:52.603 15:29:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:54.515 15:29:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:56.420 15:29:54 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:58.953 15:29:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:01.480 15:29:59 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:03.382 15:30:01 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:03.382 15:30:01 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:30:03.382 15:30:01 -- common/autotest_common.sh@1681 -- $ lcov --version
00:30:03.382 15:30:01 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:30:03.382 15:30:01 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:30:03.382 15:30:01 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:30:03.382 15:30:01 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:30:03.382 15:30:01 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:30:03.382 15:30:01 -- scripts/common.sh@336 -- $ IFS=.-:
00:30:03.382 15:30:01 -- scripts/common.sh@336 -- $ read -ra ver1
00:30:03.382 15:30:01 -- scripts/common.sh@337 -- $ IFS=.-:
00:30:03.382 15:30:01 -- scripts/common.sh@337 -- $ read -ra ver2
00:30:03.382 15:30:01 -- scripts/common.sh@338 -- $ local 'op=<'
00:30:03.382 15:30:01 -- scripts/common.sh@340 -- $ ver1_l=2
00:30:03.382 15:30:01 -- scripts/common.sh@341 -- $ ver2_l=1
00:30:03.382 15:30:01 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:30:03.382 15:30:01 -- scripts/common.sh@344 -- $ case "$op" in
00:30:03.382 15:30:01 -- scripts/common.sh@345 -- $ : 1
00:30:03.382 15:30:01 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:30:03.382 15:30:01 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:03.382 15:30:01 -- scripts/common.sh@365 -- $ decimal 1
00:30:03.382 15:30:01 -- scripts/common.sh@353 -- $ local d=1
00:30:03.382 15:30:01 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:30:03.382 15:30:01 -- scripts/common.sh@355 -- $ echo 1
00:30:03.382 15:30:01 -- scripts/common.sh@365 -- $ ver1[v]=1
00:30:03.382 15:30:01 -- scripts/common.sh@366 -- $ decimal 2
00:30:03.382 15:30:01 -- scripts/common.sh@353 -- $ local d=2
00:30:03.382 15:30:01 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:30:03.382 15:30:01 -- scripts/common.sh@355 -- $ echo 2
00:30:03.382 15:30:01 -- scripts/common.sh@366 -- $ ver2[v]=2
00:30:03.382 15:30:01 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:30:03.382 15:30:01 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:30:03.382 15:30:01 -- scripts/common.sh@368 -- $ return 0
00:30:03.382 15:30:01 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:03.382 15:30:01 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:30:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.382 --rc genhtml_branch_coverage=1
00:30:03.382 --rc genhtml_function_coverage=1
00:30:03.382 --rc genhtml_legend=1
00:30:03.382 --rc geninfo_all_blocks=1
00:30:03.382 --rc geninfo_unexecuted_blocks=1
00:30:03.382
00:30:03.382 '
00:30:03.382 15:30:01 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:30:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.382 --rc genhtml_branch_coverage=1
00:30:03.382 --rc genhtml_function_coverage=1
00:30:03.382 --rc genhtml_legend=1
00:30:03.382 --rc geninfo_all_blocks=1
00:30:03.382 --rc geninfo_unexecuted_blocks=1
00:30:03.382
00:30:03.382 '
00:30:03.382 15:30:01 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:30:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.382 --rc genhtml_branch_coverage=1
00:30:03.382 --rc genhtml_function_coverage=1
00:30:03.382 --rc genhtml_legend=1
00:30:03.382 --rc geninfo_all_blocks=1
00:30:03.382 --rc geninfo_unexecuted_blocks=1
00:30:03.382
00:30:03.382 '
00:30:03.382 15:30:01 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:30:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.382 --rc genhtml_branch_coverage=1
00:30:03.382 --rc genhtml_function_coverage=1
00:30:03.382 --rc genhtml_legend=1
00:30:03.382 --rc geninfo_all_blocks=1
00:30:03.382 --rc geninfo_unexecuted_blocks=1
00:30:03.382
00:30:03.382 '
00:30:03.382 15:30:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:03.382 15:30:01 -- scripts/common.sh@15 -- $ shopt -s extglob
00:30:03.382 15:30:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:03.382 15:30:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:03.382 15:30:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:03.382 15:30:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.382 15:30:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.383 15:30:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.383 15:30:01 -- paths/export.sh@5 -- $ export PATH
00:30:03.383 15:30:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.383 15:30:01 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:30:03.383 15:30:01 -- common/autobuild_common.sh@479 -- $ date +%s
00:30:03.383 15:30:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727796601.XXXXXX
00:30:03.383 15:30:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727796601.DJTcuB
00:30:03.383 15:30:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:30:03.383 15:30:01 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:30:03.383 15:30:01 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:30:03.383 15:30:01 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:30:03.383 15:30:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:30:03.383 15:30:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:30:03.383 15:30:01 -- common/autobuild_common.sh@495 -- $ get_config_params
00:30:03.383 15:30:01 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:30:03.383 15:30:01 -- common/autotest_common.sh@10 -- $ set +x
00:30:03.383 15:30:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:30:03.383 15:30:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:30:03.383 15:30:01 -- pm/common@17 -- $ local monitor
00:30:03.383 15:30:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:03.383 15:30:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:03.383 15:30:01 -- pm/common@25 -- $ sleep 1
00:30:03.383 15:30:01 -- pm/common@21 -- $ date +%s
00:30:03.383 15:30:01 -- pm/common@21 -- $ date +%s
00:30:03.383 15:30:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727796601
00:30:03.383 15:30:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727796601
00:30:03.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727796601_collect-vmstat.pm.log
00:30:03.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727796601_collect-cpu-load.pm.log
00:30:04.578 15:30:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:30:04.578 15:30:02 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:30:04.578 15:30:02 -- spdk/autopackage.sh@14 -- $ timing_finish
00:30:04.578 15:30:02 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:04.578 15:30:02 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:04.578 15:30:02 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:04.578 15:30:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:04.578 15:30:02 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:04.578 15:30:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:04.578 15:30:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:04.578 15:30:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:30:04.578 15:30:02 -- pm/common@44 -- $ pid=95623
00:30:04.578 15:30:02 -- pm/common@50 -- $ kill -TERM 95623
00:30:04.578 15:30:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:04.578 15:30:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:30:04.578 15:30:02 -- pm/common@44 -- $ pid=95625
00:30:04.578 15:30:02 -- pm/common@50 -- $ kill -TERM 95625
00:30:04.587 + [[ -n 5981 ]]
00:30:04.587 + sudo kill 5981
00:30:04.594 [Pipeline] }
00:30:04.603 [Pipeline] // timeout
00:30:04.608 [Pipeline] }
00:30:04.623 [Pipeline] // stage
00:30:04.628 [Pipeline] }
00:30:04.642 [Pipeline] // catchError
00:30:04.651 [Pipeline] stage
00:30:04.653 [Pipeline] { (Stop VM)
00:30:04.665 [Pipeline] sh
00:30:04.946 + vagrant halt
00:30:08.235 ==> default: Halting domain...
00:30:14.809 [Pipeline] sh
00:30:15.137 + vagrant destroy -f
00:30:18.425 ==> default: Removing domain...
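The coverage post-processing traced above (spdk/autotest.sh@395-404) merges the base and test captures, then strips out-of-tree records. A condensed sketch with the output paths shortened for readability; the real run spells out the full --rc option list on every call and adds --ignore-errors unused,unused for the '/usr/*' pattern:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # Merge the pre-test baseline and the post-test capture.
    lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Drop coverage attributed to DPDK, system headers, and example apps.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r cov_total.info "$pat" -o cov_total.info
    done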
00:30:18.438 [Pipeline] sh
00:30:18.723 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:30:18.733 [Pipeline] }
00:30:18.748 [Pipeline] // stage
00:30:18.754 [Pipeline] }
00:30:18.769 [Pipeline] // dir
00:30:18.775 [Pipeline] }
00:30:18.789 [Pipeline] // wrap
00:30:18.796 [Pipeline] }
00:30:18.809 [Pipeline] // catchError
00:30:18.820 [Pipeline] stage
00:30:18.822 [Pipeline] { (Epilogue)
00:30:18.836 [Pipeline] sh
00:30:19.120 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:24.411 [Pipeline] catchError
00:30:24.414 [Pipeline] {
00:30:24.429 [Pipeline] sh
00:30:24.716 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:24.976 Artifacts sizes are good
00:30:24.986 [Pipeline] }
00:30:25.001 [Pipeline] // catchError
00:30:25.014 [Pipeline] archiveArtifacts
00:30:25.021 Archiving artifacts
00:30:25.176 [Pipeline] cleanWs
00:30:25.241 [WS-CLEANUP] Deleting project workspace...
00:30:25.241 [WS-CLEANUP] Deferred wipeout is used...
00:30:25.245 [WS-CLEANUP] done
00:30:25.247 [Pipeline] }
00:30:25.260 [Pipeline] // stage
00:30:25.264 [Pipeline] }
00:30:25.275 [Pipeline] // node
00:30:25.280 [Pipeline] End of Pipeline
00:30:25.320 Finished: SUCCESS